00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 1910
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3171
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.041 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.042 The recommended git tool is: git
00:00:00.042 using credential 00000000-0000-0000-0000-000000000002
00:00:00.043 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.063 Fetching changes from the remote Git repository
00:00:00.065 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.106 Using shallow fetch with depth 1
00:00:00.106 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.106 > git --version # timeout=10
00:00:00.171 > git --version # 'git version 2.39.2'
00:00:00.171 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.241 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.241 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.285 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.298 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.310 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD)
00:00:04.310 > git config core.sparsecheckout # timeout=10
00:00:04.323 > git read-tree -mu HEAD # timeout=10
00:00:04.342 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5
00:00:04.364 Commit message: "pool: fixes for VisualBuild class"
00:00:04.364 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10
00:00:04.481 [Pipeline] Start of Pipeline
00:00:04.493 [Pipeline] library
00:00:04.495 Loading library shm_lib@master
00:00:04.495 Library shm_lib@master is cached. Copying from home.
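For reference, the jbp checkout recorded above can be replayed by hand. The sketch below is a minimal, illustrative reconstruction using only the URL, refspec, and revision visible in the log; it assumes the reader already has network access to review.spdk.io plus any required proxy/credential setup, and the local directory name is hypothetical.

```bash
#!/usr/bin/env bash
# Minimal sketch: replay the pinned jbp checkout shown in the log above.
# Assumes access to review.spdk.io and that proxy/credentials are already
# configured in the environment; WORKDIR is an illustrative name only.
set -euo pipefail

WORKDIR=jbp-checkout                                   # hypothetical local dir
REPO=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
REV=9bbc799d7020f50509d938dbe97dc05da0c1b5c3           # revision pinned by the job

git init "$WORKDIR" && cd "$WORKDIR"
git remote add origin "$REPO"
# Shallow fetch of master, mirroring the job's --depth=1 fetch
git fetch --tags --force --progress --depth=1 origin refs/heads/master
# Detach onto the exact revision the pipeline built against
git checkout -f "$REV"
git log --oneline -1   # should report: pool: fixes for VisualBuild class
```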
00:00:04.508 [Pipeline] node
00:00:19.510 Still waiting to schedule task
00:00:19.510 ‘FCP03’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.510 ‘FCP04’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.510 ‘FCP07’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.510 ‘FCP08’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.510 ‘FCP09’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.510 ‘FCP10’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.510 ‘FCP11’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.510 ‘FCP12’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.510 ‘GP10’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.510 ‘GP11’ is offline
00:00:19.510 ‘GP12’ is offline
00:00:19.510 ‘GP13’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.510 ‘GP14’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.510 ‘GP15’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.510 ‘GP16’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.510 ‘GP18’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘GP1’ is offline
00:00:19.511 ‘GP20’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘GP21’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘GP22’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘GP24’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘GP2’ is offline
00:00:19.511 ‘GP3’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘GP4’ is offline
00:00:19.511 ‘GP5’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘GP6’ is offline
00:00:19.511 ‘GP8’ is offline
00:00:19.511 ‘GP9’ is offline
00:00:19.511 ‘ImageBuilder1’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘Jenkins’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘ME1’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘ME2’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘ME3’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘PE5’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘SM10’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘SM11’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘SM1’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘SM28’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘SM29’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘SM2’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘SM30’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘SM31’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘SM32’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘SM33’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘SM34’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘SM35’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘SM6’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘SM7’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘SM8’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘VM-host-PE1’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘VM-host-PE2’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘VM-host-PE3’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘VM-host-PE4’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘VM-host-SM0’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘VM-host-SM16’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘VM-host-SM17’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘VM-host-SM18’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘VM-host-SM4’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘VM-host-SM9’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘VM-host-WFP1’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘VM-host-WFP25’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘VM-host-WFP7’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘WCP0’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘WCP2’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘WCP4’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘WFP13’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘WFP17’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘WFP21’ is offline
00:00:19.511 ‘WFP23’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘WFP29’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘WFP2’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.511 ‘WFP32’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.512 ‘WFP33’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.512 ‘WFP34’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.512 ‘WFP35’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.512 ‘WFP36’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.512 ‘WFP37’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.512 ‘WFP38’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.512 ‘WFP41’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.512 ‘WFP42’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.512 ‘WFP49’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.512 ‘WFP50’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.512 ‘WFP53’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.512 ‘WFP63’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.512 ‘WFP65’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.512 ‘WFP66’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.512 ‘WFP67’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.512 ‘WFP68’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.512 ‘WFP69’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.512 ‘WFP6’ is offline
00:00:19.512 ‘WFP8’ is offline
00:00:19.512 ‘WFP9’ is offline
00:00:19.512 ‘ipxe-staging’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.512 ‘prc_bsc_waikikibeach64’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.512 ‘spdk-pxe-01’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.512 ‘spdk-pxe-02’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:55.318 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:55.320 [Pipeline] {
00:00:55.335 [Pipeline] catchError
00:00:55.337 [Pipeline] {
00:00:55.354 [Pipeline] wrap
00:00:55.365 [Pipeline] {
00:00:55.376 [Pipeline] stage
00:00:55.378 [Pipeline] { (Prologue)
00:00:55.579 [Pipeline] sh
00:00:55.866 + logger -p user.info -t JENKINS-CI
00:00:55.889 [Pipeline] echo
00:00:55.891 Node: CYP12
00:00:55.901 [Pipeline] sh
00:00:56.207 [Pipeline] setCustomBuildProperty
00:00:56.220 [Pipeline] echo
00:00:56.222 Cleanup processes
00:00:56.227 [Pipeline] sh
00:00:56.514 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:56.514 1626080 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:56.530 [Pipeline] sh
00:00:56.817 ++ grep -v 'sudo pgrep'
00:00:56.817 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:56.817 ++ awk '{print $1}'
00:00:56.817 + sudo kill -9
00:00:56.817 + true
00:00:56.833 [Pipeline] cleanWs
00:00:56.844 [WS-CLEANUP] Deleting project workspace...
00:00:56.844 [WS-CLEANUP] Deferred wipeout is used...
00:00:56.852 [WS-CLEANUP] done 00:00:56.856 [Pipeline] setCustomBuildProperty 00:00:56.872 [Pipeline] sh 00:00:57.158 + sudo git config --global --replace-all safe.directory '*' 00:00:57.235 [Pipeline] nodesByLabel 00:00:57.237 Found a total of 2 nodes with the 'sorcerer' label 00:00:57.248 [Pipeline] httpRequest 00:00:57.254 HttpMethod: GET 00:00:57.254 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:57.257 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:57.275 Response Code: HTTP/1.1 200 OK 00:00:57.275 Success: Status code 200 is in the accepted range: 200,404 00:00:57.276 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:01:00.081 [Pipeline] sh 00:01:00.368 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:01:00.387 [Pipeline] httpRequest 00:01:00.393 HttpMethod: GET 00:01:00.393 URL: http://10.211.164.101/packages/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:01:00.394 Sending request to url: http://10.211.164.101/packages/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:01:00.403 Response Code: HTTP/1.1 200 OK 00:01:00.404 Success: Status code 200 is in the accepted range: 200,404 00:01:00.404 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:01:20.513 [Pipeline] sh 00:01:20.801 + tar --no-same-owner -xf spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:01:23.362 [Pipeline] sh 00:01:23.649 + git -C spdk log --oneline -n5 00:01:23.649 130b9406a test/nvmf: replace rpc_cmd() with direct invocation of rpc.py due to inherently larger timeout 00:01:23.649 5d3fd6726 bdev: Fix a race bug between unregistration and QoS poller 00:01:23.649 fbc673ece test/scheduler: Meassure utime of $spdk_pid threads as a fallback 00:01:23.649 3651466d0 test/scheduler: Calculate median of the cpu load samples 00:01:23.649 a7414547f test/scheduler: Make sure stderr is not O_TRUNCated in move_proc() 00:01:23.663 [Pipeline] } 00:01:23.680 [Pipeline] // stage 00:01:23.690 [Pipeline] stage 00:01:23.692 [Pipeline] { (Prepare) 00:01:23.711 [Pipeline] writeFile 00:01:23.729 [Pipeline] sh 00:01:24.017 + logger -p user.info -t JENKINS-CI 00:01:24.030 [Pipeline] sh 00:01:24.348 + logger -p user.info -t JENKINS-CI 00:01:24.362 [Pipeline] sh 00:01:24.649 + cat autorun-spdk.conf 00:01:24.649 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.649 SPDK_TEST_NVMF=1 00:01:24.649 SPDK_TEST_NVME_CLI=1 00:01:24.649 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:24.649 SPDK_TEST_NVMF_NICS=e810 00:01:24.649 SPDK_RUN_UBSAN=1 00:01:24.649 NET_TYPE=phy 00:01:24.659 RUN_NIGHTLY=1 00:01:24.669 [Pipeline] readFile 00:01:24.696 [Pipeline] withEnv 00:01:24.698 [Pipeline] { 00:01:24.713 [Pipeline] sh 00:01:25.001 + set -ex 00:01:25.001 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:25.001 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:25.001 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.001 ++ SPDK_TEST_NVMF=1 00:01:25.001 ++ SPDK_TEST_NVME_CLI=1 00:01:25.001 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:25.001 ++ SPDK_TEST_NVMF_NICS=e810 00:01:25.001 ++ SPDK_RUN_UBSAN=1 00:01:25.001 ++ NET_TYPE=phy 00:01:25.001 ++ RUN_NIGHTLY=1 00:01:25.001 + case $SPDK_TEST_NVMF_NICS in 00:01:25.001 + DRIVERS=ice 00:01:25.001 + [[ tcp == \r\d\m\a ]] 00:01:25.001 + [[ -n ice ]] 00:01:25.001 + sudo rmmod mlx4_ib mlx5_ib irdma 
i40iw iw_cxgb4 00:01:25.001 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:35.004 rmmod: ERROR: Module irdma is not currently loaded 00:01:35.004 rmmod: ERROR: Module i40iw is not currently loaded 00:01:35.004 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:35.004 + true 00:01:35.004 + for D in $DRIVERS 00:01:35.004 + sudo modprobe ice 00:01:35.004 + exit 0 00:01:35.014 [Pipeline] } 00:01:35.034 [Pipeline] // withEnv 00:01:35.040 [Pipeline] } 00:01:35.059 [Pipeline] // stage 00:01:35.070 [Pipeline] catchError 00:01:35.072 [Pipeline] { 00:01:35.088 [Pipeline] timeout 00:01:35.088 Timeout set to expire in 50 min 00:01:35.090 [Pipeline] { 00:01:35.108 [Pipeline] stage 00:01:35.110 [Pipeline] { (Tests) 00:01:35.128 [Pipeline] sh 00:01:35.416 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:35.416 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:35.416 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:35.416 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:35.416 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:35.416 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:35.416 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:35.416 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:35.416 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:35.416 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:35.416 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:35.416 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:35.416 + source /etc/os-release 00:01:35.417 ++ NAME='Fedora Linux' 00:01:35.417 ++ VERSION='38 (Cloud Edition)' 00:01:35.417 ++ ID=fedora 00:01:35.417 ++ VERSION_ID=38 00:01:35.417 ++ VERSION_CODENAME= 00:01:35.417 ++ PLATFORM_ID=platform:f38 00:01:35.417 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:35.417 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:35.417 ++ LOGO=fedora-logo-icon 00:01:35.417 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:35.417 ++ HOME_URL=https://fedoraproject.org/ 00:01:35.417 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:35.417 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:35.417 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:35.417 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:35.417 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:35.417 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:35.417 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:35.417 ++ SUPPORT_END=2024-05-14 00:01:35.417 ++ VARIANT='Cloud Edition' 00:01:35.417 ++ VARIANT_ID=cloud 00:01:35.417 + uname -a 00:01:35.417 Linux spdk-cyp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:35.417 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:38.719 Hugepages 00:01:38.719 node hugesize free / total 00:01:38.719 node0 1048576kB 0 / 0 00:01:38.719 node0 2048kB 0 / 0 00:01:38.719 node1 1048576kB 0 / 0 00:01:38.719 node1 2048kB 0 / 0 00:01:38.719 00:01:38.719 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:38.719 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:38.719 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:38.719 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:38.719 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:38.719 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:38.719 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 
00:01:38.719 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:38.720 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:38.720 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:38.720 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:38.720 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:38.720 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:38.720 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:38.720 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:38.720 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:38.720 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:38.720 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:38.720 + rm -f /tmp/spdk-ld-path 00:01:38.720 + source autorun-spdk.conf 00:01:38.720 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:38.720 ++ SPDK_TEST_NVMF=1 00:01:38.720 ++ SPDK_TEST_NVME_CLI=1 00:01:38.720 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:38.720 ++ SPDK_TEST_NVMF_NICS=e810 00:01:38.720 ++ SPDK_RUN_UBSAN=1 00:01:38.720 ++ NET_TYPE=phy 00:01:38.720 ++ RUN_NIGHTLY=1 00:01:38.720 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:38.720 + [[ -n '' ]] 00:01:38.720 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:38.720 + for M in /var/spdk/build-*-manifest.txt 00:01:38.720 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:38.720 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:38.720 + for M in /var/spdk/build-*-manifest.txt 00:01:38.720 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:38.720 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:38.720 ++ uname 00:01:38.720 + [[ Linux == \L\i\n\u\x ]] 00:01:38.720 + sudo dmesg -T 00:01:38.720 + sudo dmesg --clear 00:01:38.720 + dmesg_pid=1627649 00:01:38.720 + [[ Fedora Linux == FreeBSD ]] 00:01:38.720 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:38.720 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:38.720 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:38.720 + [[ -x /usr/src/fio-static/fio ]] 00:01:38.720 + export FIO_BIN=/usr/src/fio-static/fio 00:01:38.720 + FIO_BIN=/usr/src/fio-static/fio 00:01:38.720 + sudo dmesg -Tw 00:01:38.720 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:38.720 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:38.720 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:38.720 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:38.720 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:38.720 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:38.720 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:38.720 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:38.720 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:38.720 Test configuration: 00:01:38.720 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:38.720 SPDK_TEST_NVMF=1 00:01:38.720 SPDK_TEST_NVME_CLI=1 00:01:38.720 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:38.720 SPDK_TEST_NVMF_NICS=e810 00:01:38.720 SPDK_RUN_UBSAN=1 00:01:38.720 NET_TYPE=phy 00:01:38.720 RUN_NIGHTLY=1 11:39:32 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:38.720 11:39:32 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:38.720 11:39:32 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:38.720 11:39:32 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:38.720 11:39:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.720 11:39:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.720 11:39:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.720 11:39:32 -- paths/export.sh@5 -- $ export PATH 00:01:38.720 11:39:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.720 11:39:32 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:38.720 11:39:32 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:38.720 11:39:32 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1718012372.XXXXXX 00:01:38.720 11:39:32 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1718012372.STl5VR 00:01:38.720 11:39:32 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:38.720 11:39:32 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 
00:01:38.720 11:39:32 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:38.720 11:39:32 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:38.720 11:39:32 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:38.720 11:39:32 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:38.720 11:39:32 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:38.720 11:39:32 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.720 11:39:32 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:38.720 11:39:32 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:38.720 11:39:32 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:38.720 11:39:32 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:38.720 11:39:32 -- spdk/autobuild.sh@16 -- $ date -u 00:01:38.720 Mon Jun 10 09:39:32 AM UTC 2024 00:01:38.720 11:39:32 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:38.720 LTS-43-g130b9406a 00:01:38.720 11:39:32 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:38.720 11:39:32 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:38.720 11:39:32 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:38.720 11:39:32 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:38.720 11:39:32 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:38.720 11:39:32 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.720 ************************************ 00:01:38.720 START TEST ubsan 00:01:38.720 ************************************ 00:01:38.720 11:39:32 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:38.720 using ubsan 00:01:38.720 00:01:38.720 real 0m0.000s 00:01:38.720 user 0m0.000s 00:01:38.720 sys 0m0.000s 00:01:38.720 11:39:32 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:38.720 11:39:32 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.720 ************************************ 00:01:38.720 END TEST ubsan 00:01:38.720 ************************************ 00:01:38.720 11:39:32 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:38.720 11:39:32 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:38.720 11:39:32 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:38.720 11:39:32 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:38.720 11:39:32 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:38.720 11:39:32 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:38.720 11:39:32 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:38.720 11:39:32 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:38.720 11:39:32 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:38.720 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:38.720 Using default DPDK in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:39.292 Using 'verbs' RDMA provider 00:01:52.097 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:02:07.006 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:07.006 Creating mk/config.mk...done. 00:02:07.006 Creating mk/cc.flags.mk...done. 00:02:07.006 Type 'make' to build. 00:02:07.006 11:39:59 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:02:07.006 11:39:59 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:07.006 11:39:59 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:07.006 11:39:59 -- common/autotest_common.sh@10 -- $ set +x 00:02:07.006 ************************************ 00:02:07.006 START TEST make 00:02:07.006 ************************************ 00:02:07.006 11:39:59 -- common/autotest_common.sh@1104 -- $ make -j144 00:02:07.006 make[1]: Nothing to be done for 'all'. 00:02:15.146 The Meson build system 00:02:15.146 Version: 1.3.1 00:02:15.146 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:15.146 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:15.146 Build type: native build 00:02:15.146 Program cat found: YES (/usr/bin/cat) 00:02:15.146 Project name: DPDK 00:02:15.146 Project version: 23.11.0 00:02:15.146 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:15.146 C linker for the host machine: cc ld.bfd 2.39-16 00:02:15.146 Host machine cpu family: x86_64 00:02:15.146 Host machine cpu: x86_64 00:02:15.146 Message: ## Building in Developer Mode ## 00:02:15.146 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:15.146 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:15.146 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:15.146 Program python3 found: YES (/usr/bin/python3) 00:02:15.146 Program cat found: YES (/usr/bin/cat) 00:02:15.146 Compiler for C supports arguments -march=native: YES 00:02:15.146 Checking for size of "void *" : 8 00:02:15.146 Checking for size of "void *" : 8 (cached) 00:02:15.146 Library m found: YES 00:02:15.146 Library numa found: YES 00:02:15.146 Has header "numaif.h" : YES 00:02:15.146 Library fdt found: NO 00:02:15.146 Library execinfo found: NO 00:02:15.146 Has header "execinfo.h" : YES 00:02:15.146 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:15.146 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:15.146 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:15.146 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:15.146 Run-time dependency openssl found: YES 3.0.9 00:02:15.146 Run-time dependency libpcap found: YES 1.10.4 00:02:15.146 Has header "pcap.h" with dependency libpcap: YES 00:02:15.146 Compiler for C supports arguments -Wcast-qual: YES 00:02:15.146 Compiler for C supports arguments -Wdeprecated: YES 00:02:15.146 Compiler for C supports arguments -Wformat: YES 00:02:15.147 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:15.147 Compiler for C supports arguments -Wformat-security: NO 00:02:15.147 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:15.147 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:15.147 Compiler for C 
supports arguments -Wnested-externs: YES 00:02:15.147 Compiler for C supports arguments -Wold-style-definition: YES 00:02:15.147 Compiler for C supports arguments -Wpointer-arith: YES 00:02:15.147 Compiler for C supports arguments -Wsign-compare: YES 00:02:15.147 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:15.147 Compiler for C supports arguments -Wundef: YES 00:02:15.147 Compiler for C supports arguments -Wwrite-strings: YES 00:02:15.147 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:15.147 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:15.147 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:15.147 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:15.147 Program objdump found: YES (/usr/bin/objdump) 00:02:15.147 Compiler for C supports arguments -mavx512f: YES 00:02:15.147 Checking if "AVX512 checking" compiles: YES 00:02:15.147 Fetching value of define "__SSE4_2__" : 1 00:02:15.147 Fetching value of define "__AES__" : 1 00:02:15.147 Fetching value of define "__AVX__" : 1 00:02:15.147 Fetching value of define "__AVX2__" : 1 00:02:15.147 Fetching value of define "__AVX512BW__" : 1 00:02:15.147 Fetching value of define "__AVX512CD__" : 1 00:02:15.147 Fetching value of define "__AVX512DQ__" : 1 00:02:15.147 Fetching value of define "__AVX512F__" : 1 00:02:15.147 Fetching value of define "__AVX512VL__" : 1 00:02:15.147 Fetching value of define "__PCLMUL__" : 1 00:02:15.147 Fetching value of define "__RDRND__" : 1 00:02:15.147 Fetching value of define "__RDSEED__" : 1 00:02:15.147 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:15.147 Fetching value of define "__znver1__" : (undefined) 00:02:15.147 Fetching value of define "__znver2__" : (undefined) 00:02:15.147 Fetching value of define "__znver3__" : (undefined) 00:02:15.147 Fetching value of define "__znver4__" : (undefined) 00:02:15.147 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:15.147 Message: lib/log: Defining dependency "log" 00:02:15.147 Message: lib/kvargs: Defining dependency "kvargs" 00:02:15.147 Message: lib/telemetry: Defining dependency "telemetry" 00:02:15.147 Checking for function "getentropy" : NO 00:02:15.147 Message: lib/eal: Defining dependency "eal" 00:02:15.147 Message: lib/ring: Defining dependency "ring" 00:02:15.147 Message: lib/rcu: Defining dependency "rcu" 00:02:15.147 Message: lib/mempool: Defining dependency "mempool" 00:02:15.147 Message: lib/mbuf: Defining dependency "mbuf" 00:02:15.147 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:15.147 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:15.147 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:15.147 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:15.147 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:15.147 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:15.147 Compiler for C supports arguments -mpclmul: YES 00:02:15.147 Compiler for C supports arguments -maes: YES 00:02:15.147 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:15.147 Compiler for C supports arguments -mavx512bw: YES 00:02:15.147 Compiler for C supports arguments -mavx512dq: YES 00:02:15.147 Compiler for C supports arguments -mavx512vl: YES 00:02:15.147 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:15.147 Compiler for C supports arguments -mavx2: YES 00:02:15.147 Compiler for C supports arguments -mavx: YES 00:02:15.147 Message: lib/net: Defining dependency "net" 
00:02:15.147 Message: lib/meter: Defining dependency "meter" 00:02:15.147 Message: lib/ethdev: Defining dependency "ethdev" 00:02:15.147 Message: lib/pci: Defining dependency "pci" 00:02:15.147 Message: lib/cmdline: Defining dependency "cmdline" 00:02:15.147 Message: lib/hash: Defining dependency "hash" 00:02:15.147 Message: lib/timer: Defining dependency "timer" 00:02:15.147 Message: lib/compressdev: Defining dependency "compressdev" 00:02:15.147 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:15.147 Message: lib/dmadev: Defining dependency "dmadev" 00:02:15.147 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:15.147 Message: lib/power: Defining dependency "power" 00:02:15.147 Message: lib/reorder: Defining dependency "reorder" 00:02:15.147 Message: lib/security: Defining dependency "security" 00:02:15.147 Has header "linux/userfaultfd.h" : YES 00:02:15.147 Has header "linux/vduse.h" : YES 00:02:15.147 Message: lib/vhost: Defining dependency "vhost" 00:02:15.147 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:15.147 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:15.147 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:15.147 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:15.147 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:15.147 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:15.147 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:15.147 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:15.147 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:15.147 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:15.147 Program doxygen found: YES (/usr/bin/doxygen) 00:02:15.147 Configuring doxy-api-html.conf using configuration 00:02:15.147 Configuring doxy-api-man.conf using configuration 00:02:15.147 Program mandb found: YES (/usr/bin/mandb) 00:02:15.147 Program sphinx-build found: NO 00:02:15.147 Configuring rte_build_config.h using configuration 00:02:15.147 Message: 00:02:15.147 ================= 00:02:15.147 Applications Enabled 00:02:15.147 ================= 00:02:15.147 00:02:15.147 apps: 00:02:15.147 00:02:15.147 00:02:15.147 Message: 00:02:15.147 ================= 00:02:15.147 Libraries Enabled 00:02:15.147 ================= 00:02:15.147 00:02:15.147 libs: 00:02:15.147 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:15.147 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:15.147 cryptodev, dmadev, power, reorder, security, vhost, 00:02:15.147 00:02:15.147 Message: 00:02:15.147 =============== 00:02:15.147 Drivers Enabled 00:02:15.147 =============== 00:02:15.147 00:02:15.147 common: 00:02:15.147 00:02:15.147 bus: 00:02:15.147 pci, vdev, 00:02:15.147 mempool: 00:02:15.147 ring, 00:02:15.147 dma: 00:02:15.147 00:02:15.147 net: 00:02:15.147 00:02:15.147 crypto: 00:02:15.147 00:02:15.147 compress: 00:02:15.147 00:02:15.147 vdpa: 00:02:15.147 00:02:15.147 00:02:15.147 Message: 00:02:15.147 ================= 00:02:15.147 Content Skipped 00:02:15.147 ================= 00:02:15.147 00:02:15.147 apps: 00:02:15.147 dumpcap: explicitly disabled via build config 00:02:15.147 graph: explicitly disabled via build config 00:02:15.147 pdump: explicitly disabled via build config 00:02:15.147 proc-info: explicitly disabled via build config 00:02:15.147 test-acl: explicitly disabled via build config 
00:02:15.147 test-bbdev: explicitly disabled via build config 00:02:15.147 test-cmdline: explicitly disabled via build config 00:02:15.147 test-compress-perf: explicitly disabled via build config 00:02:15.147 test-crypto-perf: explicitly disabled via build config 00:02:15.147 test-dma-perf: explicitly disabled via build config 00:02:15.147 test-eventdev: explicitly disabled via build config 00:02:15.147 test-fib: explicitly disabled via build config 00:02:15.147 test-flow-perf: explicitly disabled via build config 00:02:15.147 test-gpudev: explicitly disabled via build config 00:02:15.147 test-mldev: explicitly disabled via build config 00:02:15.147 test-pipeline: explicitly disabled via build config 00:02:15.147 test-pmd: explicitly disabled via build config 00:02:15.147 test-regex: explicitly disabled via build config 00:02:15.147 test-sad: explicitly disabled via build config 00:02:15.147 test-security-perf: explicitly disabled via build config 00:02:15.147 00:02:15.147 libs: 00:02:15.147 metrics: explicitly disabled via build config 00:02:15.147 acl: explicitly disabled via build config 00:02:15.147 bbdev: explicitly disabled via build config 00:02:15.147 bitratestats: explicitly disabled via build config 00:02:15.147 bpf: explicitly disabled via build config 00:02:15.147 cfgfile: explicitly disabled via build config 00:02:15.147 distributor: explicitly disabled via build config 00:02:15.147 efd: explicitly disabled via build config 00:02:15.147 eventdev: explicitly disabled via build config 00:02:15.147 dispatcher: explicitly disabled via build config 00:02:15.147 gpudev: explicitly disabled via build config 00:02:15.147 gro: explicitly disabled via build config 00:02:15.147 gso: explicitly disabled via build config 00:02:15.147 ip_frag: explicitly disabled via build config 00:02:15.147 jobstats: explicitly disabled via build config 00:02:15.147 latencystats: explicitly disabled via build config 00:02:15.147 lpm: explicitly disabled via build config 00:02:15.147 member: explicitly disabled via build config 00:02:15.147 pcapng: explicitly disabled via build config 00:02:15.147 rawdev: explicitly disabled via build config 00:02:15.147 regexdev: explicitly disabled via build config 00:02:15.147 mldev: explicitly disabled via build config 00:02:15.147 rib: explicitly disabled via build config 00:02:15.147 sched: explicitly disabled via build config 00:02:15.147 stack: explicitly disabled via build config 00:02:15.147 ipsec: explicitly disabled via build config 00:02:15.147 pdcp: explicitly disabled via build config 00:02:15.147 fib: explicitly disabled via build config 00:02:15.147 port: explicitly disabled via build config 00:02:15.147 pdump: explicitly disabled via build config 00:02:15.147 table: explicitly disabled via build config 00:02:15.147 pipeline: explicitly disabled via build config 00:02:15.147 graph: explicitly disabled via build config 00:02:15.147 node: explicitly disabled via build config 00:02:15.147 00:02:15.147 drivers: 00:02:15.147 common/cpt: not in enabled drivers build config 00:02:15.147 common/dpaax: not in enabled drivers build config 00:02:15.147 common/iavf: not in enabled drivers build config 00:02:15.147 common/idpf: not in enabled drivers build config 00:02:15.147 common/mvep: not in enabled drivers build config 00:02:15.148 common/octeontx: not in enabled drivers build config 00:02:15.148 bus/auxiliary: not in enabled drivers build config 00:02:15.148 bus/cdx: not in enabled drivers build config 00:02:15.148 bus/dpaa: not in enabled drivers build config 
00:02:15.148 bus/fslmc: not in enabled drivers build config 00:02:15.148 bus/ifpga: not in enabled drivers build config 00:02:15.148 bus/platform: not in enabled drivers build config 00:02:15.148 bus/vmbus: not in enabled drivers build config 00:02:15.148 common/cnxk: not in enabled drivers build config 00:02:15.148 common/mlx5: not in enabled drivers build config 00:02:15.148 common/nfp: not in enabled drivers build config 00:02:15.148 common/qat: not in enabled drivers build config 00:02:15.148 common/sfc_efx: not in enabled drivers build config 00:02:15.148 mempool/bucket: not in enabled drivers build config 00:02:15.148 mempool/cnxk: not in enabled drivers build config 00:02:15.148 mempool/dpaa: not in enabled drivers build config 00:02:15.148 mempool/dpaa2: not in enabled drivers build config 00:02:15.148 mempool/octeontx: not in enabled drivers build config 00:02:15.148 mempool/stack: not in enabled drivers build config 00:02:15.148 dma/cnxk: not in enabled drivers build config 00:02:15.148 dma/dpaa: not in enabled drivers build config 00:02:15.148 dma/dpaa2: not in enabled drivers build config 00:02:15.148 dma/hisilicon: not in enabled drivers build config 00:02:15.148 dma/idxd: not in enabled drivers build config 00:02:15.148 dma/ioat: not in enabled drivers build config 00:02:15.148 dma/skeleton: not in enabled drivers build config 00:02:15.148 net/af_packet: not in enabled drivers build config 00:02:15.148 net/af_xdp: not in enabled drivers build config 00:02:15.148 net/ark: not in enabled drivers build config 00:02:15.148 net/atlantic: not in enabled drivers build config 00:02:15.148 net/avp: not in enabled drivers build config 00:02:15.148 net/axgbe: not in enabled drivers build config 00:02:15.148 net/bnx2x: not in enabled drivers build config 00:02:15.148 net/bnxt: not in enabled drivers build config 00:02:15.148 net/bonding: not in enabled drivers build config 00:02:15.148 net/cnxk: not in enabled drivers build config 00:02:15.148 net/cpfl: not in enabled drivers build config 00:02:15.148 net/cxgbe: not in enabled drivers build config 00:02:15.148 net/dpaa: not in enabled drivers build config 00:02:15.148 net/dpaa2: not in enabled drivers build config 00:02:15.148 net/e1000: not in enabled drivers build config 00:02:15.148 net/ena: not in enabled drivers build config 00:02:15.148 net/enetc: not in enabled drivers build config 00:02:15.148 net/enetfec: not in enabled drivers build config 00:02:15.148 net/enic: not in enabled drivers build config 00:02:15.148 net/failsafe: not in enabled drivers build config 00:02:15.148 net/fm10k: not in enabled drivers build config 00:02:15.148 net/gve: not in enabled drivers build config 00:02:15.148 net/hinic: not in enabled drivers build config 00:02:15.148 net/hns3: not in enabled drivers build config 00:02:15.148 net/i40e: not in enabled drivers build config 00:02:15.148 net/iavf: not in enabled drivers build config 00:02:15.148 net/ice: not in enabled drivers build config 00:02:15.148 net/idpf: not in enabled drivers build config 00:02:15.148 net/igc: not in enabled drivers build config 00:02:15.148 net/ionic: not in enabled drivers build config 00:02:15.148 net/ipn3ke: not in enabled drivers build config 00:02:15.148 net/ixgbe: not in enabled drivers build config 00:02:15.148 net/mana: not in enabled drivers build config 00:02:15.148 net/memif: not in enabled drivers build config 00:02:15.148 net/mlx4: not in enabled drivers build config 00:02:15.148 net/mlx5: not in enabled drivers build config 00:02:15.148 net/mvneta: not in enabled 
drivers build config 00:02:15.148 net/mvpp2: not in enabled drivers build config 00:02:15.148 net/netvsc: not in enabled drivers build config 00:02:15.148 net/nfb: not in enabled drivers build config 00:02:15.148 net/nfp: not in enabled drivers build config 00:02:15.148 net/ngbe: not in enabled drivers build config 00:02:15.148 net/null: not in enabled drivers build config 00:02:15.148 net/octeontx: not in enabled drivers build config 00:02:15.148 net/octeon_ep: not in enabled drivers build config 00:02:15.148 net/pcap: not in enabled drivers build config 00:02:15.148 net/pfe: not in enabled drivers build config 00:02:15.148 net/qede: not in enabled drivers build config 00:02:15.148 net/ring: not in enabled drivers build config 00:02:15.148 net/sfc: not in enabled drivers build config 00:02:15.148 net/softnic: not in enabled drivers build config 00:02:15.148 net/tap: not in enabled drivers build config 00:02:15.148 net/thunderx: not in enabled drivers build config 00:02:15.148 net/txgbe: not in enabled drivers build config 00:02:15.148 net/vdev_netvsc: not in enabled drivers build config 00:02:15.148 net/vhost: not in enabled drivers build config 00:02:15.148 net/virtio: not in enabled drivers build config 00:02:15.148 net/vmxnet3: not in enabled drivers build config 00:02:15.148 raw/*: missing internal dependency, "rawdev" 00:02:15.148 crypto/armv8: not in enabled drivers build config 00:02:15.148 crypto/bcmfs: not in enabled drivers build config 00:02:15.148 crypto/caam_jr: not in enabled drivers build config 00:02:15.148 crypto/ccp: not in enabled drivers build config 00:02:15.148 crypto/cnxk: not in enabled drivers build config 00:02:15.148 crypto/dpaa_sec: not in enabled drivers build config 00:02:15.148 crypto/dpaa2_sec: not in enabled drivers build config 00:02:15.148 crypto/ipsec_mb: not in enabled drivers build config 00:02:15.148 crypto/mlx5: not in enabled drivers build config 00:02:15.148 crypto/mvsam: not in enabled drivers build config 00:02:15.148 crypto/nitrox: not in enabled drivers build config 00:02:15.148 crypto/null: not in enabled drivers build config 00:02:15.148 crypto/octeontx: not in enabled drivers build config 00:02:15.148 crypto/openssl: not in enabled drivers build config 00:02:15.148 crypto/scheduler: not in enabled drivers build config 00:02:15.148 crypto/uadk: not in enabled drivers build config 00:02:15.148 crypto/virtio: not in enabled drivers build config 00:02:15.148 compress/isal: not in enabled drivers build config 00:02:15.148 compress/mlx5: not in enabled drivers build config 00:02:15.148 compress/octeontx: not in enabled drivers build config 00:02:15.148 compress/zlib: not in enabled drivers build config 00:02:15.148 regex/*: missing internal dependency, "regexdev" 00:02:15.148 ml/*: missing internal dependency, "mldev" 00:02:15.148 vdpa/ifc: not in enabled drivers build config 00:02:15.148 vdpa/mlx5: not in enabled drivers build config 00:02:15.148 vdpa/nfp: not in enabled drivers build config 00:02:15.148 vdpa/sfc: not in enabled drivers build config 00:02:15.148 event/*: missing internal dependency, "eventdev" 00:02:15.148 baseband/*: missing internal dependency, "bbdev" 00:02:15.148 gpu/*: missing internal dependency, "gpudev" 00:02:15.148 00:02:15.148 00:02:15.148 Build targets in project: 84 00:02:15.148 00:02:15.148 DPDK 23.11.0 00:02:15.148 00:02:15.148 User defined options 00:02:15.148 buildtype : debug 00:02:15.148 default_library : shared 00:02:15.148 libdir : lib 00:02:15.148 prefix : 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:15.148 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:15.148 c_link_args : 00:02:15.148 cpu_instruction_set: native 00:02:15.148 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:02:15.148 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:02:15.148 enable_docs : false 00:02:15.148 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:15.148 enable_kmods : false 00:02:15.148 tests : false 00:02:15.148 00:02:15.148 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:15.148 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:15.148 [1/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:15.148 [2/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:15.148 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:15.148 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:15.148 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:15.148 [6/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:15.148 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:15.148 [8/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:15.148 [9/264] Linking static target lib/librte_kvargs.a 00:02:15.148 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:15.148 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:15.148 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:15.148 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:15.148 [14/264] Linking static target lib/librte_log.a 00:02:15.148 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:15.148 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:15.148 [17/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:15.148 [18/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:15.148 [19/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:15.148 [20/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:15.148 [21/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:15.148 [22/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:15.148 [23/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:15.148 [24/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:15.148 [25/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:15.148 [26/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:15.148 [27/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:15.148 [28/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 
00:02:15.407 [29/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:15.407 [30/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:15.407 [31/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:15.407 [32/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:15.407 [33/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:15.407 [34/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:15.407 [35/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:15.407 [36/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:15.407 [37/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:15.407 [38/264] Linking static target lib/librte_pci.a 00:02:15.407 [39/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:15.408 [40/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:15.408 [41/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:15.408 [42/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:15.408 [43/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:15.408 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:15.408 [45/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.408 [46/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:15.408 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:15.667 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:15.667 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:15.667 [50/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.667 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:15.667 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:15.667 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:15.667 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:15.667 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:15.667 [56/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:15.667 [57/264] Linking static target lib/librte_ring.a 00:02:15.667 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:15.667 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:15.667 [60/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:15.667 [61/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:15.667 [62/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:15.667 [63/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:15.667 [64/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:15.667 [65/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:15.667 [66/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:15.667 [67/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:15.667 [68/264] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:15.667 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:15.667 [70/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:15.667 [71/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:15.667 [72/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:15.667 [73/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:15.667 [74/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:15.667 [75/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:15.667 [76/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:15.667 [77/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:15.667 [78/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:15.668 [79/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:15.668 [80/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:15.668 [81/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:15.668 [82/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:15.668 [83/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:15.668 [84/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:15.668 [85/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:15.668 [86/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:15.668 [87/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:15.668 [88/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:15.668 [89/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:15.668 [90/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:15.668 [91/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:15.668 [92/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:15.668 [93/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:15.668 [94/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:15.668 [95/264] Linking static target lib/librte_telemetry.a 00:02:15.668 [96/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:15.668 [97/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:15.668 [98/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:15.668 [99/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:15.668 [100/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:15.668 [101/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:15.668 [102/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:15.668 [103/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:15.668 [104/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:15.668 [105/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:15.668 [106/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:15.668 [107/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:15.668 [108/264] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:15.668 [109/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:15.668 [110/264] Linking static target lib/librte_timer.a 00:02:15.668 [111/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:15.668 [112/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:15.668 [113/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:15.668 [114/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:15.668 [115/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:15.668 [116/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:15.668 [117/264] Linking static target lib/librte_dmadev.a 00:02:15.668 [118/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:15.668 [119/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:15.668 [120/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:15.668 [121/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:15.668 [122/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:15.668 [123/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:15.668 [124/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:15.668 [125/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.668 [126/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:15.668 [127/264] Linking static target lib/librte_meter.a 00:02:15.668 [128/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:15.668 [129/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:15.668 [130/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:15.668 [131/264] Linking static target lib/librte_net.a 00:02:15.668 [132/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:15.668 [133/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:15.668 [134/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:15.668 [135/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:15.929 [136/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:15.929 [137/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:15.929 [138/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:15.929 [139/264] Linking target lib/librte_log.so.24.0 00:02:15.929 [140/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:15.929 [141/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:15.929 [142/264] Linking static target lib/librte_power.a 00:02:15.929 [143/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:15.929 [144/264] Linking static target lib/librte_compressdev.a 00:02:15.929 [145/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:15.929 [146/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:15.929 [147/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:15.929 [148/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:15.929 [149/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 
00:02:15.929 [150/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:15.929 [151/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:15.929 [152/264] Linking static target lib/librte_cmdline.a 00:02:15.929 [153/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:15.929 [154/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:15.929 [155/264] Linking static target lib/librte_rcu.a 00:02:15.929 [156/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:15.929 [157/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:15.929 [158/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:15.929 [159/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:15.929 [160/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:15.929 [161/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:15.929 [162/264] Linking static target lib/librte_security.a 00:02:15.929 [163/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:15.929 [164/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:15.929 [165/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:15.929 [166/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:15.929 [167/264] Linking static target lib/librte_eal.a 00:02:15.929 [168/264] Linking static target lib/librte_reorder.a 00:02:15.929 [169/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.929 [170/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:15.929 [171/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:15.930 [172/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:15.930 [173/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:15.930 [174/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:15.930 [175/264] Linking static target lib/librte_mempool.a 00:02:15.930 [176/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:15.930 [177/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:15.930 [178/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:15.930 [179/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:15.930 [180/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:15.930 [181/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:15.930 [182/264] Linking target lib/librte_kvargs.so.24.0 00:02:15.930 [183/264] Linking static target drivers/librte_bus_vdev.a 00:02:15.930 [184/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:15.930 [185/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:15.930 [186/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:15.930 [187/264] Linking static target lib/librte_mbuf.a 00:02:15.930 [188/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.930 [189/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:16.191 [190/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:16.191 [191/264] Generating lib/net.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:16.191 [192/264] Linking static target lib/librte_hash.a 00:02:16.191 [193/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:16.191 [194/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:16.191 [195/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:16.191 [196/264] Linking static target drivers/librte_bus_pci.a 00:02:16.191 [197/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:16.191 [198/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:16.191 [199/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:16.191 [200/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.191 [201/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.191 [202/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:16.191 [203/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:16.191 [204/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:16.191 [205/264] Linking static target drivers/librte_mempool_ring.a 00:02:16.191 [206/264] Linking static target lib/librte_cryptodev.a 00:02:16.191 [207/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.191 [208/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.452 [209/264] Linking target lib/librte_telemetry.so.24.0 00:02:16.452 [210/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.452 [211/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.452 [212/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:16.452 [213/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.452 [214/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:16.715 [215/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.715 [216/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:16.715 [217/264] Linking static target lib/librte_ethdev.a 00:02:16.715 [218/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.976 [219/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.976 [220/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.976 [221/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.976 [222/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.237 [223/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.498 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:17.759 [225/264] Linking static target lib/librte_vhost.a 00:02:18.330 [226/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.723 [227/264] Generating lib/vhost.sym_chk with 
a custom command (wrapped by meson to capture output) 00:02:26.313 [228/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.698 [229/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.698 [230/264] Linking target lib/librte_eal.so.24.0 00:02:27.698 [231/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:27.698 [232/264] Linking target lib/librte_ring.so.24.0 00:02:27.698 [233/264] Linking target lib/librte_pci.so.24.0 00:02:27.698 [234/264] Linking target lib/librte_meter.so.24.0 00:02:27.698 [235/264] Linking target lib/librte_timer.so.24.0 00:02:27.698 [236/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:27.698 [237/264] Linking target lib/librte_dmadev.so.24.0 00:02:27.698 [238/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:27.698 [239/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:27.698 [240/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:27.698 [241/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:27.698 [242/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:27.959 [243/264] Linking target lib/librte_rcu.so.24.0 00:02:27.959 [244/264] Linking target lib/librte_mempool.so.24.0 00:02:27.959 [245/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:27.959 [246/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:27.959 [247/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:27.959 [248/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:27.959 [249/264] Linking target lib/librte_mbuf.so.24.0 00:02:28.219 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:28.219 [251/264] Linking target lib/librte_reorder.so.24.0 00:02:28.219 [252/264] Linking target lib/librte_net.so.24.0 00:02:28.219 [253/264] Linking target lib/librte_compressdev.so.24.0 00:02:28.219 [254/264] Linking target lib/librte_cryptodev.so.24.0 00:02:28.219 [255/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:28.480 [256/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:28.480 [257/264] Linking target lib/librte_cmdline.so.24.0 00:02:28.480 [258/264] Linking target lib/librte_hash.so.24.0 00:02:28.480 [259/264] Linking target lib/librte_ethdev.so.24.0 00:02:28.480 [260/264] Linking target lib/librte_security.so.24.0 00:02:28.480 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:28.480 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:28.740 [263/264] Linking target lib/librte_power.so.24.0 00:02:28.740 [264/264] Linking target lib/librte_vhost.so.24.0 00:02:28.740 INFO: autodetecting backend as ninja 00:02:28.740 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:29.682 CC lib/log/log.o 00:02:29.682 CC lib/log/log_flags.o 00:02:29.682 CC lib/log/log_deprecated.o 00:02:29.682 CC lib/ut_mock/mock.o 00:02:29.682 CC lib/ut/ut.o 00:02:29.682 LIB libspdk_log.a 00:02:29.682 LIB libspdk_ut_mock.a 00:02:29.682 LIB libspdk_ut.a 00:02:29.682 SO libspdk_ut_mock.so.5.0 00:02:29.682 SO 
libspdk_log.so.6.1 00:02:29.682 SO libspdk_ut.so.1.0 00:02:29.682 SYMLINK libspdk_ut_mock.so 00:02:29.682 SYMLINK libspdk_log.so 00:02:29.682 SYMLINK libspdk_ut.so 00:02:29.943 CXX lib/trace_parser/trace.o 00:02:29.943 CC lib/util/base64.o 00:02:29.943 CC lib/util/bit_array.o 00:02:29.943 CC lib/util/cpuset.o 00:02:29.943 CC lib/util/crc16.o 00:02:29.943 CC lib/util/crc32.o 00:02:29.943 CC lib/util/crc32c.o 00:02:29.943 CC lib/util/crc32_ieee.o 00:02:29.943 CC lib/util/crc64.o 00:02:29.943 CC lib/util/dif.o 00:02:29.943 CC lib/dma/dma.o 00:02:29.943 CC lib/ioat/ioat.o 00:02:29.943 CC lib/util/fd.o 00:02:29.943 CC lib/util/file.o 00:02:29.943 CC lib/util/hexlify.o 00:02:29.943 CC lib/util/iov.o 00:02:29.943 CC lib/util/math.o 00:02:29.943 CC lib/util/pipe.o 00:02:29.943 CC lib/util/strerror_tls.o 00:02:29.943 CC lib/util/string.o 00:02:29.943 CC lib/util/uuid.o 00:02:29.943 CC lib/util/fd_group.o 00:02:29.943 CC lib/util/xor.o 00:02:29.943 CC lib/util/zipf.o 00:02:30.204 CC lib/vfio_user/host/vfio_user_pci.o 00:02:30.204 CC lib/vfio_user/host/vfio_user.o 00:02:30.204 LIB libspdk_dma.a 00:02:30.204 SO libspdk_dma.so.3.0 00:02:30.204 SYMLINK libspdk_dma.so 00:02:30.204 LIB libspdk_ioat.a 00:02:30.466 SO libspdk_ioat.so.6.0 00:02:30.466 LIB libspdk_vfio_user.a 00:02:30.466 SYMLINK libspdk_ioat.so 00:02:30.466 SO libspdk_vfio_user.so.4.0 00:02:30.466 SYMLINK libspdk_vfio_user.so 00:02:30.466 LIB libspdk_util.a 00:02:30.466 SO libspdk_util.so.8.0 00:02:30.728 SYMLINK libspdk_util.so 00:02:30.728 LIB libspdk_trace_parser.a 00:02:30.728 SO libspdk_trace_parser.so.4.0 00:02:30.988 CC lib/conf/conf.o 00:02:30.988 CC lib/rdma/common.o 00:02:30.988 CC lib/rdma/rdma_verbs.o 00:02:30.988 CC lib/json/json_parse.o 00:02:30.988 CC lib/json/json_util.o 00:02:30.988 SYMLINK libspdk_trace_parser.so 00:02:30.988 CC lib/json/json_write.o 00:02:30.988 CC lib/vmd/vmd.o 00:02:30.988 CC lib/vmd/led.o 00:02:30.988 CC lib/idxd/idxd.o 00:02:30.988 CC lib/idxd/idxd_user.o 00:02:30.988 CC lib/env_dpdk/env.o 00:02:30.988 CC lib/idxd/idxd_kernel.o 00:02:30.988 CC lib/env_dpdk/memory.o 00:02:30.988 CC lib/env_dpdk/pci.o 00:02:30.988 CC lib/env_dpdk/threads.o 00:02:30.988 CC lib/env_dpdk/init.o 00:02:30.988 CC lib/env_dpdk/pci_ioat.o 00:02:30.988 CC lib/env_dpdk/pci_virtio.o 00:02:30.988 CC lib/env_dpdk/pci_vmd.o 00:02:30.988 CC lib/env_dpdk/pci_idxd.o 00:02:30.988 CC lib/env_dpdk/pci_event.o 00:02:30.988 CC lib/env_dpdk/sigbus_handler.o 00:02:30.988 CC lib/env_dpdk/pci_dpdk.o 00:02:30.988 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:30.988 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:31.250 LIB libspdk_conf.a 00:02:31.250 SO libspdk_conf.so.5.0 00:02:31.250 LIB libspdk_rdma.a 00:02:31.250 LIB libspdk_json.a 00:02:31.250 SYMLINK libspdk_conf.so 00:02:31.250 SO libspdk_rdma.so.5.0 00:02:31.250 SO libspdk_json.so.5.1 00:02:31.250 SYMLINK libspdk_rdma.so 00:02:31.250 SYMLINK libspdk_json.so 00:02:31.537 LIB libspdk_idxd.a 00:02:31.537 SO libspdk_idxd.so.11.0 00:02:31.537 LIB libspdk_vmd.a 00:02:31.537 SYMLINK libspdk_idxd.so 00:02:31.537 SO libspdk_vmd.so.5.0 00:02:31.537 CC lib/jsonrpc/jsonrpc_server.o 00:02:31.537 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:31.537 CC lib/jsonrpc/jsonrpc_client.o 00:02:31.537 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:31.537 SYMLINK libspdk_vmd.so 00:02:31.861 LIB libspdk_jsonrpc.a 00:02:31.861 SO libspdk_jsonrpc.so.5.1 00:02:31.861 SYMLINK libspdk_jsonrpc.so 00:02:32.122 LIB libspdk_env_dpdk.a 00:02:32.122 CC lib/rpc/rpc.o 00:02:32.122 SO libspdk_env_dpdk.so.13.0 00:02:32.383 SYMLINK libspdk_env_dpdk.so 
00:02:32.383 LIB libspdk_rpc.a 00:02:32.383 SO libspdk_rpc.so.5.0 00:02:32.383 SYMLINK libspdk_rpc.so 00:02:32.644 CC lib/trace/trace.o 00:02:32.644 CC lib/trace/trace_flags.o 00:02:32.644 CC lib/trace/trace_rpc.o 00:02:32.644 CC lib/notify/notify.o 00:02:32.644 CC lib/notify/notify_rpc.o 00:02:32.644 CC lib/sock/sock.o 00:02:32.644 CC lib/sock/sock_rpc.o 00:02:32.905 LIB libspdk_notify.a 00:02:32.905 SO libspdk_notify.so.5.0 00:02:32.905 LIB libspdk_trace.a 00:02:32.905 SO libspdk_trace.so.9.0 00:02:32.905 SYMLINK libspdk_notify.so 00:02:33.167 SYMLINK libspdk_trace.so 00:02:33.167 LIB libspdk_sock.a 00:02:33.167 SO libspdk_sock.so.8.0 00:02:33.167 SYMLINK libspdk_sock.so 00:02:33.167 CC lib/thread/thread.o 00:02:33.167 CC lib/thread/iobuf.o 00:02:33.428 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:33.428 CC lib/nvme/nvme_ctrlr.o 00:02:33.428 CC lib/nvme/nvme_fabric.o 00:02:33.428 CC lib/nvme/nvme_ns_cmd.o 00:02:33.428 CC lib/nvme/nvme_ns.o 00:02:33.428 CC lib/nvme/nvme_pcie_common.o 00:02:33.428 CC lib/nvme/nvme_pcie.o 00:02:33.428 CC lib/nvme/nvme_qpair.o 00:02:33.428 CC lib/nvme/nvme.o 00:02:33.428 CC lib/nvme/nvme_quirks.o 00:02:33.428 CC lib/nvme/nvme_transport.o 00:02:33.428 CC lib/nvme/nvme_discovery.o 00:02:33.428 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:33.428 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:33.428 CC lib/nvme/nvme_tcp.o 00:02:33.428 CC lib/nvme/nvme_opal.o 00:02:33.428 CC lib/nvme/nvme_io_msg.o 00:02:33.428 CC lib/nvme/nvme_poll_group.o 00:02:33.428 CC lib/nvme/nvme_zns.o 00:02:33.428 CC lib/nvme/nvme_cuse.o 00:02:33.428 CC lib/nvme/nvme_vfio_user.o 00:02:33.428 CC lib/nvme/nvme_rdma.o 00:02:34.815 LIB libspdk_thread.a 00:02:34.815 SO libspdk_thread.so.9.0 00:02:34.815 SYMLINK libspdk_thread.so 00:02:34.815 CC lib/virtio/virtio.o 00:02:34.815 CC lib/virtio/virtio_vhost_user.o 00:02:34.815 CC lib/virtio/virtio_vfio_user.o 00:02:34.815 CC lib/virtio/virtio_pci.o 00:02:34.815 CC lib/accel/accel.o 00:02:34.815 CC lib/accel/accel_rpc.o 00:02:35.077 CC lib/accel/accel_sw.o 00:02:35.077 CC lib/init/json_config.o 00:02:35.077 CC lib/blob/blobstore.o 00:02:35.077 CC lib/init/subsystem.o 00:02:35.077 CC lib/blob/request.o 00:02:35.077 CC lib/init/subsystem_rpc.o 00:02:35.077 CC lib/blob/zeroes.o 00:02:35.077 CC lib/blob/blob_bs_dev.o 00:02:35.077 CC lib/init/rpc.o 00:02:35.077 LIB libspdk_init.a 00:02:35.338 SO libspdk_init.so.4.0 00:02:35.338 LIB libspdk_virtio.a 00:02:35.338 LIB libspdk_nvme.a 00:02:35.338 SO libspdk_virtio.so.6.0 00:02:35.338 SYMLINK libspdk_init.so 00:02:35.338 SYMLINK libspdk_virtio.so 00:02:35.338 SO libspdk_nvme.so.12.0 00:02:35.599 CC lib/event/app.o 00:02:35.599 CC lib/event/reactor.o 00:02:35.599 CC lib/event/log_rpc.o 00:02:35.599 CC lib/event/app_rpc.o 00:02:35.599 CC lib/event/scheduler_static.o 00:02:35.599 SYMLINK libspdk_nvme.so 00:02:35.861 LIB libspdk_accel.a 00:02:35.861 SO libspdk_accel.so.14.0 00:02:35.861 LIB libspdk_event.a 00:02:35.861 SYMLINK libspdk_accel.so 00:02:35.861 SO libspdk_event.so.12.0 00:02:36.122 SYMLINK libspdk_event.so 00:02:36.122 CC lib/bdev/bdev.o 00:02:36.122 CC lib/bdev/bdev_rpc.o 00:02:36.122 CC lib/bdev/part.o 00:02:36.122 CC lib/bdev/bdev_zone.o 00:02:36.122 CC lib/bdev/scsi_nvme.o 00:02:37.509 LIB libspdk_blob.a 00:02:37.509 SO libspdk_blob.so.10.1 00:02:37.509 SYMLINK libspdk_blob.so 00:02:37.509 CC lib/blobfs/blobfs.o 00:02:37.509 CC lib/blobfs/tree.o 00:02:37.509 CC lib/lvol/lvol.o 00:02:38.081 LIB libspdk_blobfs.a 00:02:38.081 SO libspdk_blobfs.so.9.0 00:02:38.081 SYMLINK libspdk_blobfs.so 00:02:38.342 LIB libspdk_lvol.a 
00:02:38.342 LIB libspdk_bdev.a 00:02:38.342 SO libspdk_lvol.so.9.1 00:02:38.342 SO libspdk_bdev.so.14.0 00:02:38.342 SYMLINK libspdk_lvol.so 00:02:38.603 SYMLINK libspdk_bdev.so 00:02:38.603 CC lib/nbd/nbd.o 00:02:38.603 CC lib/nbd/nbd_rpc.o 00:02:38.603 CC lib/nvmf/ctrlr.o 00:02:38.603 CC lib/nvmf/ctrlr_bdev.o 00:02:38.603 CC lib/nvmf/ctrlr_discovery.o 00:02:38.603 CC lib/ftl/ftl_core.o 00:02:38.603 CC lib/ftl/ftl_init.o 00:02:38.603 CC lib/nvmf/subsystem.o 00:02:38.603 CC lib/ftl/ftl_layout.o 00:02:38.603 CC lib/nvmf/nvmf.o 00:02:38.603 CC lib/ftl/ftl_debug.o 00:02:38.603 CC lib/nvmf/nvmf_rpc.o 00:02:38.603 CC lib/ftl/ftl_io.o 00:02:38.603 CC lib/ftl/ftl_sb.o 00:02:38.603 CC lib/nvmf/transport.o 00:02:38.603 CC lib/ftl/ftl_l2p.o 00:02:38.603 CC lib/nvmf/tcp.o 00:02:38.603 CC lib/ftl/ftl_l2p_flat.o 00:02:38.603 CC lib/nvmf/rdma.o 00:02:38.603 CC lib/ftl/ftl_nv_cache.o 00:02:38.603 CC lib/ftl/ftl_band.o 00:02:38.603 CC lib/ftl/ftl_band_ops.o 00:02:38.603 CC lib/ftl/ftl_writer.o 00:02:38.603 CC lib/ftl/ftl_rq.o 00:02:38.603 CC lib/ublk/ublk.o 00:02:38.603 CC lib/scsi/dev.o 00:02:38.603 CC lib/ftl/ftl_reloc.o 00:02:38.603 CC lib/ublk/ublk_rpc.o 00:02:38.603 CC lib/scsi/lun.o 00:02:38.603 CC lib/ftl/ftl_l2p_cache.o 00:02:38.603 CC lib/scsi/port.o 00:02:38.603 CC lib/ftl/ftl_p2l.o 00:02:38.603 CC lib/scsi/scsi.o 00:02:38.603 CC lib/ftl/mngt/ftl_mngt.o 00:02:38.603 CC lib/scsi/scsi_bdev.o 00:02:38.603 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:38.603 CC lib/scsi/scsi_pr.o 00:02:38.603 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:38.603 CC lib/scsi/scsi_rpc.o 00:02:38.603 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:38.603 CC lib/scsi/task.o 00:02:38.603 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:38.603 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:38.603 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:38.603 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:38.603 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:38.603 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:38.603 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:38.603 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:38.861 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:38.861 CC lib/ftl/utils/ftl_conf.o 00:02:38.861 CC lib/ftl/utils/ftl_md.o 00:02:38.861 CC lib/ftl/utils/ftl_mempool.o 00:02:38.861 CC lib/ftl/utils/ftl_bitmap.o 00:02:38.861 CC lib/ftl/utils/ftl_property.o 00:02:38.861 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:38.861 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:38.861 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:38.861 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:38.861 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:38.861 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:38.861 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:38.861 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:38.861 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:38.861 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:38.861 CC lib/ftl/base/ftl_base_dev.o 00:02:38.861 CC lib/ftl/base/ftl_base_bdev.o 00:02:38.861 CC lib/ftl/ftl_trace.o 00:02:39.120 LIB libspdk_nbd.a 00:02:39.120 SO libspdk_nbd.so.6.0 00:02:39.120 LIB libspdk_scsi.a 00:02:39.382 SO libspdk_scsi.so.8.0 00:02:39.382 SYMLINK libspdk_nbd.so 00:02:39.382 LIB libspdk_ublk.a 00:02:39.382 SO libspdk_ublk.so.2.0 00:02:39.382 SYMLINK libspdk_scsi.so 00:02:39.382 SYMLINK libspdk_ublk.so 00:02:39.643 LIB libspdk_ftl.a 00:02:39.643 CC lib/vhost/vhost.o 00:02:39.643 CC lib/vhost/vhost_rpc.o 00:02:39.643 CC lib/vhost/rte_vhost_user.o 00:02:39.643 CC lib/vhost/vhost_scsi.o 00:02:39.643 CC lib/vhost/vhost_blk.o 00:02:39.643 CC lib/iscsi/conn.o 00:02:39.643 CC lib/iscsi/init_grp.o 00:02:39.643 CC lib/iscsi/iscsi.o 00:02:39.643 CC 
lib/iscsi/md5.o 00:02:39.643 CC lib/iscsi/param.o 00:02:39.643 CC lib/iscsi/portal_grp.o 00:02:39.643 CC lib/iscsi/tgt_node.o 00:02:39.643 CC lib/iscsi/iscsi_subsystem.o 00:02:39.643 CC lib/iscsi/iscsi_rpc.o 00:02:39.643 CC lib/iscsi/task.o 00:02:39.643 SO libspdk_ftl.so.8.0 00:02:39.904 SYMLINK libspdk_ftl.so 00:02:40.476 LIB libspdk_nvmf.a 00:02:40.476 LIB libspdk_vhost.a 00:02:40.476 SO libspdk_nvmf.so.17.0 00:02:40.476 SO libspdk_vhost.so.7.1 00:02:40.738 SYMLINK libspdk_vhost.so 00:02:40.738 SYMLINK libspdk_nvmf.so 00:02:40.738 LIB libspdk_iscsi.a 00:02:40.738 SO libspdk_iscsi.so.7.0 00:02:40.999 SYMLINK libspdk_iscsi.so 00:02:41.261 CC module/env_dpdk/env_dpdk_rpc.o 00:02:41.522 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:41.522 CC module/blob/bdev/blob_bdev.o 00:02:41.522 CC module/scheduler/gscheduler/gscheduler.o 00:02:41.522 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:41.522 CC module/accel/dsa/accel_dsa.o 00:02:41.522 CC module/sock/posix/posix.o 00:02:41.522 CC module/accel/dsa/accel_dsa_rpc.o 00:02:41.522 CC module/accel/ioat/accel_ioat.o 00:02:41.522 CC module/accel/iaa/accel_iaa.o 00:02:41.522 CC module/accel/ioat/accel_ioat_rpc.o 00:02:41.522 CC module/accel/error/accel_error.o 00:02:41.522 CC module/accel/iaa/accel_iaa_rpc.o 00:02:41.522 CC module/accel/error/accel_error_rpc.o 00:02:41.522 LIB libspdk_env_dpdk_rpc.a 00:02:41.522 SO libspdk_env_dpdk_rpc.so.5.0 00:02:41.522 SYMLINK libspdk_env_dpdk_rpc.so 00:02:41.522 LIB libspdk_scheduler_gscheduler.a 00:02:41.522 LIB libspdk_scheduler_dpdk_governor.a 00:02:41.522 LIB libspdk_scheduler_dynamic.a 00:02:41.522 SO libspdk_scheduler_gscheduler.so.3.0 00:02:41.522 LIB libspdk_accel_error.a 00:02:41.784 SO libspdk_scheduler_dynamic.so.3.0 00:02:41.784 LIB libspdk_accel_ioat.a 00:02:41.784 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:41.784 LIB libspdk_accel_iaa.a 00:02:41.784 LIB libspdk_accel_dsa.a 00:02:41.784 SO libspdk_accel_error.so.1.0 00:02:41.784 SYMLINK libspdk_scheduler_dynamic.so 00:02:41.784 SO libspdk_accel_ioat.so.5.0 00:02:41.784 SYMLINK libspdk_scheduler_gscheduler.so 00:02:41.784 SO libspdk_accel_iaa.so.2.0 00:02:41.784 LIB libspdk_blob_bdev.a 00:02:41.784 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:41.784 SO libspdk_accel_dsa.so.4.0 00:02:41.784 SO libspdk_blob_bdev.so.10.1 00:02:41.784 SYMLINK libspdk_accel_error.so 00:02:41.784 SYMLINK libspdk_accel_ioat.so 00:02:41.784 SYMLINK libspdk_accel_iaa.so 00:02:41.784 SYMLINK libspdk_accel_dsa.so 00:02:41.784 SYMLINK libspdk_blob_bdev.so 00:02:42.045 LIB libspdk_sock_posix.a 00:02:42.045 SO libspdk_sock_posix.so.5.0 00:02:42.045 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:42.045 CC module/blobfs/bdev/blobfs_bdev.o 00:02:42.309 CC module/bdev/gpt/gpt.o 00:02:42.309 CC module/bdev/gpt/vbdev_gpt.o 00:02:42.309 CC module/bdev/delay/vbdev_delay.o 00:02:42.309 CC module/bdev/null/bdev_null.o 00:02:42.309 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:42.309 CC module/bdev/null/bdev_null_rpc.o 00:02:42.309 CC module/bdev/error/vbdev_error.o 00:02:42.309 CC module/bdev/malloc/bdev_malloc.o 00:02:42.309 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:42.309 CC module/bdev/error/vbdev_error_rpc.o 00:02:42.309 CC module/bdev/lvol/vbdev_lvol.o 00:02:42.309 CC module/bdev/split/vbdev_split.o 00:02:42.310 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:42.310 CC module/bdev/split/vbdev_split_rpc.o 00:02:42.310 CC module/bdev/raid/bdev_raid_rpc.o 00:02:42.310 CC module/bdev/raid/bdev_raid.o 00:02:42.310 CC module/bdev/iscsi/bdev_iscsi.o 00:02:42.310 CC 
module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:42.310 CC module/bdev/ftl/bdev_ftl.o 00:02:42.310 CC module/bdev/aio/bdev_aio_rpc.o 00:02:42.310 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:42.310 CC module/bdev/raid/bdev_raid_sb.o 00:02:42.310 CC module/bdev/passthru/vbdev_passthru.o 00:02:42.310 CC module/bdev/aio/bdev_aio.o 00:02:42.310 CC module/bdev/raid/raid0.o 00:02:42.310 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:42.310 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:42.310 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:42.310 CC module/bdev/nvme/bdev_nvme.o 00:02:42.310 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:42.310 CC module/bdev/raid/raid1.o 00:02:42.310 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:42.310 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:42.310 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:42.310 CC module/bdev/raid/concat.o 00:02:42.310 CC module/bdev/nvme/nvme_rpc.o 00:02:42.310 CC module/bdev/nvme/bdev_mdns_client.o 00:02:42.310 CC module/bdev/nvme/vbdev_opal.o 00:02:42.310 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:42.310 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:42.310 SYMLINK libspdk_sock_posix.so 00:02:42.310 LIB libspdk_blobfs_bdev.a 00:02:42.310 SO libspdk_blobfs_bdev.so.5.0 00:02:42.571 LIB libspdk_bdev_error.a 00:02:42.571 LIB libspdk_bdev_gpt.a 00:02:42.571 LIB libspdk_bdev_null.a 00:02:42.571 LIB libspdk_bdev_split.a 00:02:42.571 SO libspdk_bdev_gpt.so.5.0 00:02:42.571 SYMLINK libspdk_blobfs_bdev.so 00:02:42.571 SO libspdk_bdev_error.so.5.0 00:02:42.571 LIB libspdk_bdev_ftl.a 00:02:42.571 SO libspdk_bdev_split.so.5.0 00:02:42.571 SO libspdk_bdev_null.so.5.0 00:02:42.571 LIB libspdk_bdev_passthru.a 00:02:42.571 SO libspdk_bdev_ftl.so.5.0 00:02:42.571 SYMLINK libspdk_bdev_error.so 00:02:42.571 LIB libspdk_bdev_malloc.a 00:02:42.571 LIB libspdk_bdev_aio.a 00:02:42.571 LIB libspdk_bdev_zone_block.a 00:02:42.571 SYMLINK libspdk_bdev_gpt.so 00:02:42.571 LIB libspdk_bdev_delay.a 00:02:42.571 SO libspdk_bdev_passthru.so.5.0 00:02:42.571 SO libspdk_bdev_aio.so.5.0 00:02:42.571 SO libspdk_bdev_malloc.so.5.0 00:02:42.571 SYMLINK libspdk_bdev_null.so 00:02:42.571 SYMLINK libspdk_bdev_split.so 00:02:42.571 LIB libspdk_bdev_iscsi.a 00:02:42.571 SO libspdk_bdev_zone_block.so.5.0 00:02:42.571 SO libspdk_bdev_delay.so.5.0 00:02:42.571 SYMLINK libspdk_bdev_ftl.so 00:02:42.571 SO libspdk_bdev_iscsi.so.5.0 00:02:42.571 SYMLINK libspdk_bdev_passthru.so 00:02:42.571 SYMLINK libspdk_bdev_malloc.so 00:02:42.571 LIB libspdk_bdev_lvol.a 00:02:42.571 SYMLINK libspdk_bdev_aio.so 00:02:42.571 SYMLINK libspdk_bdev_zone_block.so 00:02:42.571 SYMLINK libspdk_bdev_delay.so 00:02:42.571 SO libspdk_bdev_lvol.so.5.0 00:02:42.571 SYMLINK libspdk_bdev_iscsi.so 00:02:42.833 LIB libspdk_bdev_virtio.a 00:02:42.833 SO libspdk_bdev_virtio.so.5.0 00:02:42.833 SYMLINK libspdk_bdev_lvol.so 00:02:42.833 SYMLINK libspdk_bdev_virtio.so 00:02:43.094 LIB libspdk_bdev_raid.a 00:02:43.094 SO libspdk_bdev_raid.so.5.0 00:02:43.094 SYMLINK libspdk_bdev_raid.so 00:02:44.037 LIB libspdk_bdev_nvme.a 00:02:44.037 SO libspdk_bdev_nvme.so.6.0 00:02:44.298 SYMLINK libspdk_bdev_nvme.so 00:02:44.559 CC module/event/subsystems/sock/sock.o 00:02:44.559 CC module/event/subsystems/vmd/vmd.o 00:02:44.559 CC module/event/subsystems/iobuf/iobuf.o 00:02:44.559 CC module/event/subsystems/scheduler/scheduler.o 00:02:44.559 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:44.559 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:44.559 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:44.821 LIB 
libspdk_event_sock.a 00:02:44.821 SO libspdk_event_sock.so.4.0 00:02:44.821 LIB libspdk_event_scheduler.a 00:02:44.821 LIB libspdk_event_vhost_blk.a 00:02:44.821 LIB libspdk_event_vmd.a 00:02:44.821 LIB libspdk_event_iobuf.a 00:02:44.821 SO libspdk_event_scheduler.so.3.0 00:02:44.821 SO libspdk_event_vmd.so.5.0 00:02:44.821 SO libspdk_event_vhost_blk.so.2.0 00:02:44.821 SO libspdk_event_iobuf.so.2.0 00:02:44.821 SYMLINK libspdk_event_sock.so 00:02:45.082 SYMLINK libspdk_event_scheduler.so 00:02:45.082 SYMLINK libspdk_event_vhost_blk.so 00:02:45.082 SYMLINK libspdk_event_vmd.so 00:02:45.082 SYMLINK libspdk_event_iobuf.so 00:02:45.082 CC module/event/subsystems/accel/accel.o 00:02:45.343 LIB libspdk_event_accel.a 00:02:45.343 SO libspdk_event_accel.so.5.0 00:02:45.343 SYMLINK libspdk_event_accel.so 00:02:45.604 CC module/event/subsystems/bdev/bdev.o 00:02:45.865 LIB libspdk_event_bdev.a 00:02:45.865 SO libspdk_event_bdev.so.5.0 00:02:45.865 SYMLINK libspdk_event_bdev.so 00:02:46.127 CC module/event/subsystems/ublk/ublk.o 00:02:46.127 CC module/event/subsystems/scsi/scsi.o 00:02:46.127 CC module/event/subsystems/nbd/nbd.o 00:02:46.127 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:46.127 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:46.388 LIB libspdk_event_ublk.a 00:02:46.388 LIB libspdk_event_nbd.a 00:02:46.388 LIB libspdk_event_scsi.a 00:02:46.388 SO libspdk_event_ublk.so.2.0 00:02:46.388 SO libspdk_event_nbd.so.5.0 00:02:46.388 SO libspdk_event_scsi.so.5.0 00:02:46.388 LIB libspdk_event_nvmf.a 00:02:46.388 SYMLINK libspdk_event_ublk.so 00:02:46.388 SYMLINK libspdk_event_nbd.so 00:02:46.388 SYMLINK libspdk_event_scsi.so 00:02:46.388 SO libspdk_event_nvmf.so.5.0 00:02:46.388 SYMLINK libspdk_event_nvmf.so 00:02:46.650 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:46.650 CC module/event/subsystems/iscsi/iscsi.o 00:02:46.911 LIB libspdk_event_vhost_scsi.a 00:02:46.911 LIB libspdk_event_iscsi.a 00:02:46.911 SO libspdk_event_vhost_scsi.so.2.0 00:02:46.911 SO libspdk_event_iscsi.so.5.0 00:02:46.911 SYMLINK libspdk_event_vhost_scsi.so 00:02:46.911 SYMLINK libspdk_event_iscsi.so 00:02:47.172 SO libspdk.so.5.0 00:02:47.172 SYMLINK libspdk.so 00:02:47.432 CC app/spdk_nvme_perf/perf.o 00:02:47.432 CC app/spdk_nvme_identify/identify.o 00:02:47.432 CXX app/trace/trace.o 00:02:47.432 CC app/trace_record/trace_record.o 00:02:47.432 CC app/spdk_nvme_discover/discovery_aer.o 00:02:47.432 CC test/rpc_client/rpc_client_test.o 00:02:47.432 CC app/spdk_lspci/spdk_lspci.o 00:02:47.432 CC app/spdk_top/spdk_top.o 00:02:47.432 TEST_HEADER include/spdk/accel.h 00:02:47.432 TEST_HEADER include/spdk/accel_module.h 00:02:47.432 TEST_HEADER include/spdk/assert.h 00:02:47.432 TEST_HEADER include/spdk/base64.h 00:02:47.432 TEST_HEADER include/spdk/barrier.h 00:02:47.432 TEST_HEADER include/spdk/bdev.h 00:02:47.432 TEST_HEADER include/spdk/bdev_module.h 00:02:47.432 TEST_HEADER include/spdk/bdev_zone.h 00:02:47.432 TEST_HEADER include/spdk/bit_array.h 00:02:47.432 TEST_HEADER include/spdk/bit_pool.h 00:02:47.432 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:47.432 TEST_HEADER include/spdk/blobfs.h 00:02:47.432 TEST_HEADER include/spdk/blob_bdev.h 00:02:47.432 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:47.432 CC app/spdk_dd/spdk_dd.o 00:02:47.432 TEST_HEADER include/spdk/blob.h 00:02:47.432 TEST_HEADER include/spdk/config.h 00:02:47.432 TEST_HEADER include/spdk/conf.h 00:02:47.432 TEST_HEADER include/spdk/cpuset.h 00:02:47.432 CC app/iscsi_tgt/iscsi_tgt.o 00:02:47.432 TEST_HEADER include/spdk/crc16.h 
00:02:47.432 TEST_HEADER include/spdk/crc32.h 00:02:47.432 CC app/nvmf_tgt/nvmf_main.o 00:02:47.432 TEST_HEADER include/spdk/crc64.h 00:02:47.432 TEST_HEADER include/spdk/dma.h 00:02:47.432 TEST_HEADER include/spdk/dif.h 00:02:47.432 TEST_HEADER include/spdk/endian.h 00:02:47.432 TEST_HEADER include/spdk/env_dpdk.h 00:02:47.432 TEST_HEADER include/spdk/env.h 00:02:47.432 CC app/spdk_tgt/spdk_tgt.o 00:02:47.432 TEST_HEADER include/spdk/event.h 00:02:47.432 TEST_HEADER include/spdk/fd_group.h 00:02:47.432 TEST_HEADER include/spdk/fd.h 00:02:47.432 TEST_HEADER include/spdk/file.h 00:02:47.432 TEST_HEADER include/spdk/ftl.h 00:02:47.432 TEST_HEADER include/spdk/gpt_spec.h 00:02:47.432 TEST_HEADER include/spdk/hexlify.h 00:02:47.432 TEST_HEADER include/spdk/idxd.h 00:02:47.432 TEST_HEADER include/spdk/idxd_spec.h 00:02:47.432 TEST_HEADER include/spdk/histogram_data.h 00:02:47.432 TEST_HEADER include/spdk/init.h 00:02:47.432 TEST_HEADER include/spdk/ioat.h 00:02:47.432 TEST_HEADER include/spdk/ioat_spec.h 00:02:47.432 TEST_HEADER include/spdk/iscsi_spec.h 00:02:47.432 TEST_HEADER include/spdk/json.h 00:02:47.432 TEST_HEADER include/spdk/jsonrpc.h 00:02:47.432 TEST_HEADER include/spdk/likely.h 00:02:47.432 TEST_HEADER include/spdk/log.h 00:02:47.432 TEST_HEADER include/spdk/lvol.h 00:02:47.432 TEST_HEADER include/spdk/memory.h 00:02:47.432 TEST_HEADER include/spdk/mmio.h 00:02:47.432 TEST_HEADER include/spdk/nbd.h 00:02:47.432 TEST_HEADER include/spdk/notify.h 00:02:47.432 TEST_HEADER include/spdk/nvme_intel.h 00:02:47.432 TEST_HEADER include/spdk/nvme.h 00:02:47.432 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:47.432 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:47.432 TEST_HEADER include/spdk/nvme_spec.h 00:02:47.432 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:47.432 TEST_HEADER include/spdk/nvme_zns.h 00:02:47.432 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:47.432 TEST_HEADER include/spdk/nvmf.h 00:02:47.432 TEST_HEADER include/spdk/nvmf_spec.h 00:02:47.432 TEST_HEADER include/spdk/opal.h 00:02:47.432 TEST_HEADER include/spdk/nvmf_transport.h 00:02:47.432 TEST_HEADER include/spdk/opal_spec.h 00:02:47.432 TEST_HEADER include/spdk/pci_ids.h 00:02:47.432 TEST_HEADER include/spdk/pipe.h 00:02:47.432 TEST_HEADER include/spdk/reduce.h 00:02:47.432 TEST_HEADER include/spdk/queue.h 00:02:47.432 TEST_HEADER include/spdk/rpc.h 00:02:47.432 TEST_HEADER include/spdk/scheduler.h 00:02:47.432 TEST_HEADER include/spdk/scsi_spec.h 00:02:47.432 TEST_HEADER include/spdk/scsi.h 00:02:47.432 TEST_HEADER include/spdk/sock.h 00:02:47.432 TEST_HEADER include/spdk/stdinc.h 00:02:47.432 TEST_HEADER include/spdk/string.h 00:02:47.432 TEST_HEADER include/spdk/thread.h 00:02:47.432 TEST_HEADER include/spdk/trace.h 00:02:47.432 TEST_HEADER include/spdk/tree.h 00:02:47.432 TEST_HEADER include/spdk/trace_parser.h 00:02:47.432 CC app/vhost/vhost.o 00:02:47.432 TEST_HEADER include/spdk/util.h 00:02:47.432 TEST_HEADER include/spdk/ublk.h 00:02:47.432 TEST_HEADER include/spdk/uuid.h 00:02:47.432 TEST_HEADER include/spdk/version.h 00:02:47.432 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:47.432 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:47.432 TEST_HEADER include/spdk/vmd.h 00:02:47.432 TEST_HEADER include/spdk/vhost.h 00:02:47.432 TEST_HEADER include/spdk/xor.h 00:02:47.432 TEST_HEADER include/spdk/zipf.h 00:02:47.432 CXX test/cpp_headers/accel_module.o 00:02:47.432 CXX test/cpp_headers/assert.o 00:02:47.432 CXX test/cpp_headers/accel.o 00:02:47.432 CXX test/cpp_headers/barrier.o 00:02:47.432 CXX 
test/cpp_headers/base64.o 00:02:47.432 CXX test/cpp_headers/bdev.o 00:02:47.432 CC examples/ioat/verify/verify.o 00:02:47.432 CXX test/cpp_headers/bdev_module.o 00:02:47.432 CXX test/cpp_headers/bit_array.o 00:02:47.432 CXX test/cpp_headers/bit_pool.o 00:02:47.432 CC test/env/vtophys/vtophys.o 00:02:47.432 CXX test/cpp_headers/bdev_zone.o 00:02:47.432 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:47.432 CXX test/cpp_headers/blobfs.o 00:02:47.432 CXX test/cpp_headers/blob_bdev.o 00:02:47.432 CXX test/cpp_headers/blob.o 00:02:47.432 CXX test/cpp_headers/blobfs_bdev.o 00:02:47.432 CC examples/util/zipf/zipf.o 00:02:47.432 CC test/event/reactor/reactor.o 00:02:47.432 CXX test/cpp_headers/conf.o 00:02:47.432 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:47.432 CC test/event/event_perf/event_perf.o 00:02:47.432 CC app/fio/nvme/fio_plugin.o 00:02:47.432 CXX test/cpp_headers/config.o 00:02:47.433 CXX test/cpp_headers/cpuset.o 00:02:47.433 CC examples/nvme/reconnect/reconnect.o 00:02:47.433 CC test/nvme/reset/reset.o 00:02:47.433 CC examples/nvme/hotplug/hotplug.o 00:02:47.433 CXX test/cpp_headers/crc32.o 00:02:47.701 CXX test/cpp_headers/crc16.o 00:02:47.701 CC test/app/jsoncat/jsoncat.o 00:02:47.701 CC examples/accel/perf/accel_perf.o 00:02:47.701 CXX test/cpp_headers/dif.o 00:02:47.701 CC test/nvme/startup/startup.o 00:02:47.701 CC test/event/reactor_perf/reactor_perf.o 00:02:47.701 CC examples/nvme/hello_world/hello_world.o 00:02:47.701 CXX test/cpp_headers/crc64.o 00:02:47.701 CXX test/cpp_headers/endian.o 00:02:47.701 CC test/nvme/aer/aer.o 00:02:47.701 CXX test/cpp_headers/dma.o 00:02:47.701 CXX test/cpp_headers/env.o 00:02:47.701 CXX test/cpp_headers/env_dpdk.o 00:02:47.701 CXX test/cpp_headers/event.o 00:02:47.701 CC test/accel/dif/dif.o 00:02:47.701 CC test/nvme/sgl/sgl.o 00:02:47.701 CXX test/cpp_headers/fd_group.o 00:02:47.701 CC examples/nvme/arbitration/arbitration.o 00:02:47.701 CC test/app/histogram_perf/histogram_perf.o 00:02:47.701 CC test/nvme/fdp/fdp.o 00:02:47.701 CXX test/cpp_headers/fd.o 00:02:47.701 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:47.701 CXX test/cpp_headers/ftl.o 00:02:47.701 CXX test/cpp_headers/file.o 00:02:47.701 CC examples/nvme/abort/abort.o 00:02:47.701 CXX test/cpp_headers/gpt_spec.o 00:02:47.701 CXX test/cpp_headers/histogram_data.o 00:02:47.701 CC test/app/stub/stub.o 00:02:47.701 CC test/nvme/err_injection/err_injection.o 00:02:47.701 CXX test/cpp_headers/idxd_spec.o 00:02:47.701 CXX test/cpp_headers/hexlify.o 00:02:47.701 CXX test/cpp_headers/idxd.o 00:02:47.701 CC test/nvme/overhead/overhead.o 00:02:47.701 CC test/thread/poller_perf/poller_perf.o 00:02:47.701 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:47.701 CXX test/cpp_headers/ioat.o 00:02:47.701 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:47.701 CC test/nvme/e2edp/nvme_dp.o 00:02:47.701 CC test/nvme/fused_ordering/fused_ordering.o 00:02:47.701 CC test/nvme/boot_partition/boot_partition.o 00:02:47.701 CC examples/vmd/lsvmd/lsvmd.o 00:02:47.701 CXX test/cpp_headers/ioat_spec.o 00:02:47.701 CXX test/cpp_headers/init.o 00:02:47.701 CXX test/cpp_headers/iscsi_spec.o 00:02:47.701 CC test/env/pci/pci_ut.o 00:02:47.701 CXX test/cpp_headers/jsonrpc.o 00:02:47.701 CXX test/cpp_headers/json.o 00:02:47.701 CC test/nvme/compliance/nvme_compliance.o 00:02:47.701 CC examples/idxd/perf/perf.o 00:02:47.701 CC test/nvme/simple_copy/simple_copy.o 00:02:47.701 CXX test/cpp_headers/likely.o 00:02:47.701 CXX test/cpp_headers/log.o 00:02:47.701 CXX test/cpp_headers/memory.o 00:02:47.701 
CXX test/cpp_headers/lvol.o 00:02:47.701 CXX test/cpp_headers/mmio.o 00:02:47.701 CC test/event/app_repeat/app_repeat.o 00:02:47.701 CXX test/cpp_headers/nbd.o 00:02:47.701 CC examples/ioat/perf/perf.o 00:02:47.701 CC test/nvme/cuse/cuse.o 00:02:47.701 CXX test/cpp_headers/notify.o 00:02:47.701 CC examples/blob/cli/blobcli.o 00:02:47.701 CC examples/vmd/led/led.o 00:02:47.701 CXX test/cpp_headers/nvme.o 00:02:47.701 CXX test/cpp_headers/nvme_intel.o 00:02:47.701 CC test/env/memory/memory_ut.o 00:02:47.701 CC test/nvme/connect_stress/connect_stress.o 00:02:47.701 CC examples/bdev/hello_world/hello_bdev.o 00:02:47.701 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:47.701 CXX test/cpp_headers/nvme_ocssd.o 00:02:47.701 CXX test/cpp_headers/nvme_spec.o 00:02:47.701 CXX test/cpp_headers/nvme_zns.o 00:02:47.701 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:47.701 CC examples/sock/hello_world/hello_sock.o 00:02:47.701 CXX test/cpp_headers/nvmf_cmd.o 00:02:47.701 CXX test/cpp_headers/nvmf.o 00:02:47.701 CC test/nvme/reserve/reserve.o 00:02:47.701 CXX test/cpp_headers/nvmf_transport.o 00:02:47.701 CXX test/cpp_headers/nvmf_spec.o 00:02:47.701 CXX test/cpp_headers/opal.o 00:02:47.701 CC app/fio/bdev/fio_plugin.o 00:02:47.701 CXX test/cpp_headers/opal_spec.o 00:02:47.701 CC examples/nvmf/nvmf/nvmf.o 00:02:47.701 CXX test/cpp_headers/pipe.o 00:02:47.701 CC examples/blob/hello_world/hello_blob.o 00:02:47.701 CXX test/cpp_headers/pci_ids.o 00:02:47.701 CXX test/cpp_headers/queue.o 00:02:47.701 CXX test/cpp_headers/reduce.o 00:02:47.701 CXX test/cpp_headers/rpc.o 00:02:47.701 CXX test/cpp_headers/scheduler.o 00:02:47.701 CC examples/thread/thread/thread_ex.o 00:02:47.701 CXX test/cpp_headers/scsi.o 00:02:47.701 CC test/app/bdev_svc/bdev_svc.o 00:02:47.701 CC examples/bdev/bdevperf/bdevperf.o 00:02:47.701 CC test/blobfs/mkfs/mkfs.o 00:02:47.701 CC test/dma/test_dma/test_dma.o 00:02:47.701 CC test/bdev/bdevio/bdevio.o 00:02:47.701 CC test/event/scheduler/scheduler.o 00:02:47.701 CXX test/cpp_headers/scsi_spec.o 00:02:47.701 CXX test/cpp_headers/sock.o 00:02:47.701 LINK spdk_lspci 00:02:47.701 CC test/lvol/esnap/esnap.o 00:02:47.701 CC test/env/mem_callbacks/mem_callbacks.o 00:02:47.701 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:47.701 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:47.966 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:47.966 LINK rpc_client_test 00:02:47.966 LINK nvmf_tgt 00:02:47.966 LINK spdk_trace_record 00:02:47.966 LINK iscsi_tgt 00:02:47.966 LINK interrupt_tgt 00:02:47.966 LINK spdk_nvme_discover 00:02:47.966 LINK jsoncat 00:02:48.231 LINK spdk_tgt 00:02:48.231 LINK poller_perf 00:02:48.231 LINK vhost 00:02:48.231 LINK reactor 00:02:48.231 LINK vtophys 00:02:48.231 LINK zipf 00:02:48.231 LINK pmr_persistence 00:02:48.231 LINK event_perf 00:02:48.231 LINK cmb_copy 00:02:48.231 LINK app_repeat 00:02:48.231 LINK led 00:02:48.231 LINK lsvmd 00:02:48.231 LINK histogram_perf 00:02:48.231 LINK fused_ordering 00:02:48.231 LINK reactor_perf 00:02:48.231 LINK boot_partition 00:02:48.231 CXX test/cpp_headers/stdinc.o 00:02:48.231 LINK doorbell_aers 00:02:48.231 LINK hotplug 00:02:48.231 LINK connect_stress 00:02:48.231 LINK env_dpdk_post_init 00:02:48.231 LINK startup 00:02:48.231 LINK hello_bdev 00:02:48.231 LINK hello_blob 00:02:48.231 LINK verify 00:02:48.231 LINK bdev_svc 00:02:48.231 LINK stub 00:02:48.493 LINK reserve 00:02:48.493 LINK nvme_dp 00:02:48.493 LINK hello_world 00:02:48.493 LINK aer 00:02:48.493 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:48.493 LINK scheduler 00:02:48.493 
LINK err_injection 00:02:48.494 CXX test/cpp_headers/string.o 00:02:48.494 CXX test/cpp_headers/thread.o 00:02:48.494 LINK simple_copy 00:02:48.494 LINK hello_sock 00:02:48.494 LINK ioat_perf 00:02:48.494 LINK fdp 00:02:48.494 CXX test/cpp_headers/trace.o 00:02:48.494 LINK mkfs 00:02:48.494 CXX test/cpp_headers/trace_parser.o 00:02:48.494 CXX test/cpp_headers/tree.o 00:02:48.494 LINK sgl 00:02:48.494 CXX test/cpp_headers/ublk.o 00:02:48.494 CXX test/cpp_headers/uuid.o 00:02:48.494 CXX test/cpp_headers/util.o 00:02:48.494 CXX test/cpp_headers/version.o 00:02:48.494 CXX test/cpp_headers/vfio_user_pci.o 00:02:48.494 CXX test/cpp_headers/vfio_user_spec.o 00:02:48.494 CXX test/cpp_headers/vhost.o 00:02:48.494 CXX test/cpp_headers/vmd.o 00:02:48.494 LINK thread 00:02:48.494 CXX test/cpp_headers/xor.o 00:02:48.494 CXX test/cpp_headers/zipf.o 00:02:48.494 LINK arbitration 00:02:48.494 LINK nvme_compliance 00:02:48.494 LINK reset 00:02:48.494 LINK spdk_dd 00:02:48.494 LINK overhead 00:02:48.494 LINK dif 00:02:48.494 LINK abort 00:02:48.494 LINK nvmf 00:02:48.494 LINK idxd_perf 00:02:48.494 LINK reconnect 00:02:48.494 LINK accel_perf 00:02:48.755 LINK spdk_trace 00:02:48.755 LINK blobcli 00:02:48.755 LINK pci_ut 00:02:48.755 LINK test_dma 00:02:48.755 LINK bdevio 00:02:48.755 LINK spdk_nvme 00:02:48.755 LINK nvme_fuzz 00:02:48.755 LINK spdk_bdev 00:02:48.755 LINK nvme_manage 00:02:48.755 LINK vhost_fuzz 00:02:48.755 LINK mem_callbacks 00:02:49.016 LINK spdk_nvme_identify 00:02:49.016 LINK spdk_nvme_perf 00:02:49.016 LINK bdevperf 00:02:49.016 LINK spdk_top 00:02:49.016 LINK memory_ut 00:02:49.016 LINK cuse 00:02:49.588 LINK iscsi_fuzz 00:02:50.973 LINK esnap 00:02:51.546 00:02:51.546 real 0m45.445s 00:02:51.546 user 6m16.496s 00:02:51.546 sys 4m25.150s 00:02:51.546 11:40:45 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:51.546 11:40:45 -- common/autotest_common.sh@10 -- $ set +x 00:02:51.546 ************************************ 00:02:51.546 END TEST make 00:02:51.546 ************************************ 00:02:51.546 11:40:45 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:51.546 11:40:45 -- nvmf/common.sh@7 -- # uname -s 00:02:51.546 11:40:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:51.546 11:40:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:51.546 11:40:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:51.546 11:40:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:51.546 11:40:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:51.546 11:40:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:51.546 11:40:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:51.546 11:40:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:51.546 11:40:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:51.546 11:40:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:51.546 11:40:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:51.546 11:40:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:51.546 11:40:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:51.546 11:40:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:51.546 11:40:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:51.546 11:40:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:51.546 11:40:45 -- 
scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:51.546 11:40:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:51.546 11:40:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:51.546 11:40:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:51.546 11:40:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:51.546 11:40:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:51.546 11:40:45 -- paths/export.sh@5 -- # export PATH 00:02:51.546 11:40:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:51.546 11:40:45 -- nvmf/common.sh@46 -- # : 0 00:02:51.546 11:40:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:51.546 11:40:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:51.546 11:40:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:51.546 11:40:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:51.546 11:40:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:51.546 11:40:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:51.546 11:40:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:51.546 11:40:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:51.546 11:40:45 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:51.546 11:40:45 -- spdk/autotest.sh@32 -- # uname -s 00:02:51.546 11:40:45 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:51.546 11:40:45 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:51.546 11:40:45 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:51.546 11:40:45 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:51.546 11:40:45 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:51.546 11:40:45 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:51.546 11:40:45 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:51.546 11:40:45 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:51.546 11:40:45 -- spdk/autotest.sh@48 -- # udevadm_pid=1669974 00:02:51.546 11:40:45 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:51.546 11:40:45 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:51.546 11:40:45 -- spdk/autotest.sh@54 -- # echo 1669976 00:02:51.546 11:40:45 -- spdk/autotest.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:51.546 11:40:45 -- spdk/autotest.sh@56 -- # echo 1669977 00:02:51.546 11:40:45 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:02:51.546 11:40:45 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:51.546 11:40:45 -- spdk/autotest.sh@60 -- # echo 1669978 00:02:51.546 11:40:45 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:51.546 11:40:45 -- spdk/autotest.sh@62 -- # echo 1669979 00:02:51.546 11:40:45 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:51.546 11:40:45 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:51.546 11:40:45 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:51.546 11:40:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:51.546 11:40:45 -- common/autotest_common.sh@10 -- # set +x 00:02:51.546 11:40:45 -- spdk/autotest.sh@70 -- # create_test_list 00:02:51.546 11:40:45 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:51.546 11:40:45 -- common/autotest_common.sh@10 -- # set +x 00:02:51.546 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:51.546 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:51.546 11:40:45 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:51.546 11:40:45 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:51.546 11:40:45 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:51.547 11:40:45 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:51.547 11:40:45 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:51.547 11:40:45 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:51.547 11:40:45 -- common/autotest_common.sh@1440 -- # uname 00:02:51.547 11:40:45 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:51.547 11:40:45 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:51.547 11:40:45 -- common/autotest_common.sh@1460 -- # uname 00:02:51.547 11:40:45 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:51.547 11:40:45 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:51.547 11:40:45 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:51.547 11:40:45 -- spdk/autotest.sh@83 -- # hash lcov 00:02:51.547 11:40:45 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:51.547 11:40:45 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:51.547 --rc lcov_branch_coverage=1 00:02:51.547 --rc lcov_function_coverage=1 00:02:51.547 --rc genhtml_branch_coverage=1 00:02:51.547 --rc genhtml_function_coverage=1 00:02:51.547 --rc genhtml_legend=1 00:02:51.547 --rc geninfo_all_blocks=1 00:02:51.547 ' 00:02:51.547 11:40:45 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:51.547 --rc lcov_branch_coverage=1 00:02:51.547 
--rc lcov_function_coverage=1 00:02:51.547 --rc genhtml_branch_coverage=1 00:02:51.547 --rc genhtml_function_coverage=1 00:02:51.547 --rc genhtml_legend=1 00:02:51.547 --rc geninfo_all_blocks=1 00:02:51.547 ' 00:02:51.547 11:40:45 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:51.547 --rc lcov_branch_coverage=1 00:02:51.547 --rc lcov_function_coverage=1 00:02:51.547 --rc genhtml_branch_coverage=1 00:02:51.547 --rc genhtml_function_coverage=1 00:02:51.547 --rc genhtml_legend=1 00:02:51.547 --rc geninfo_all_blocks=1 00:02:51.547 --no-external' 00:02:51.547 11:40:45 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:51.547 --rc lcov_branch_coverage=1 00:02:51.547 --rc lcov_function_coverage=1 00:02:51.547 --rc genhtml_branch_coverage=1 00:02:51.547 --rc genhtml_function_coverage=1 00:02:51.547 --rc genhtml_legend=1 00:02:51.547 --rc geninfo_all_blocks=1 00:02:51.547 --no-external' 00:02:51.547 11:40:45 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:51.808 lcov: LCOV version 1.14 00:02:51.808 11:40:45 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:04.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:04.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:04.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:04.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:04.050 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:04.050 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:16.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:16.350 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:16.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:16.350 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:16.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:16.350 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:16.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:16.350 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:16.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:16.350 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:16.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:16.350 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:16.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:16.350 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:16.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:16.350 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:16.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:16.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:16.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:16.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:16.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:16.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:16.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:16.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:16.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:16.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:16.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:16.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:16.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:16.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:16.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:16.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:16.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:16.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:16.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:16.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:16.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no 
functions found 00:03:16.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:16.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:16.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:16.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:16.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:16.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:16.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:16.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:16.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:16.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:16.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:16.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:16.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:16.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:16.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:16.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:16.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:16.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:16.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:16.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:16.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:16.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:16.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:16.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:16.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:16.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:16.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no 
functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:16.613 
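The geninfo warnings running through this part of the log are expected at this stage: the trace shows autotest.sh capturing an initial coverage baseline with 'lcov -c -i -t Baseline', and the .gcno files built from SPDK's header-only compilation checks (test/cpp_headers/*) contain no instrumented functions, so geninfo reports "no functions found" for each of them without failing the job. A minimal sketch of the baseline-then-merge lcov flow this capture prepares; the --rc coverage flags exported as LCOV earlier in the trace are omitted here for brevity, and the later capture and merge steps are an assumption about how the final report gets assembled, not something shown in this portion of the log:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # checkout used throughout this log
    lcov -c -i -t Baseline -d "$SPDK_DIR" -o cov_base.info        # zero-count baseline before any tests run
    # ... run the test suites ...
    lcov -c -t Tests -d "$SPDK_DIR" -o cov_test.info              # counters accumulated by the tests
    lcov -a cov_base.info -a cov_test.info -o cov_total.info      # merge so never-executed files still report 0%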
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:16.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:16.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:16.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:16.875 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:16.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:16.875 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:16.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:16.875 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:16.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:16.875 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:16.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:16.875 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:16.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:16.875 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:16.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:16.875 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:16.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:16.875 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:16.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:16.875 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:16.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:16.875 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:16.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:16.875 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:16.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:16.875 geninfo: WARNING: GCOV did not produce 
any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:16.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:16.875 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:16.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:16.875 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:16.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:16.875 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:16.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:16.875 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:16.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:16.875 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:16.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:16.875 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:16.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:16.875 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:16.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:16.875 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:16.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:16.875 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:16.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:16.875 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:16.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:16.875 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:16.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:16.875 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:18.789 11:41:12 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:03:18.789 11:41:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:18.789 11:41:12 -- common/autotest_common.sh@10 -- # set +x 00:03:18.789 11:41:12 -- spdk/autotest.sh@102 -- # rm -f 00:03:18.789 11:41:12 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 
reset 00:03:22.094 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:22.094 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:22.094 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:22.094 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:22.094 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:22.094 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:22.094 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:22.094 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:22.094 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:22.094 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:22.094 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:22.094 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:22.094 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:22.094 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:22.094 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:22.094 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:22.094 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:22.094 11:41:15 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:03:22.094 11:41:15 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:22.094 11:41:15 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:22.094 11:41:15 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:22.094 11:41:15 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:22.094 11:41:15 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:22.094 11:41:15 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:22.094 11:41:15 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:22.094 11:41:15 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:22.094 11:41:15 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:03:22.094 11:41:15 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:03:22.094 11:41:15 -- spdk/autotest.sh@121 -- # grep -v p 00:03:22.094 11:41:15 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:22.094 11:41:15 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:22.094 11:41:15 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:03:22.094 11:41:15 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:22.094 11:41:15 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:22.356 No valid GPT data, bailing 00:03:22.356 11:41:15 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:22.356 11:41:15 -- scripts/common.sh@393 -- # pt= 00:03:22.356 11:41:15 -- scripts/common.sh@394 -- # return 1 00:03:22.356 11:41:15 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:22.356 1+0 records in 00:03:22.356 1+0 records out 00:03:22.356 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00194905 s, 538 MB/s 00:03:22.356 11:41:15 -- spdk/autotest.sh@129 -- # sync 00:03:22.356 11:41:15 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:22.356 11:41:15 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:22.356 11:41:15 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:30.498 11:41:23 -- spdk/autotest.sh@135 -- # uname -s 00:03:30.498 11:41:23 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 
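What the pre_cleanup block above is doing before the setup test suite starts: setup.sh reset first reports every I/OAT DMA channel and the NVMe controller as already bound to kernel drivers, then get_zoned_devs walks /sys/block/nvme* and reads each namespace's queue/zoned attribute so zoned devices can be excluded, spdk-gpt.py and blkid confirm that /dev/nvme0n1 carries no partition table ("No valid GPT data, bailing" means the disk is treated as free), and a 1 MiB dd of zeros clears any stale metadata at the start of the device. A rough per-device sketch of that logic, simplified from the traced autotest.sh/common.sh helpers rather than copied from them:

    for sysdev in /sys/block/nvme*n*; do
        dev=/dev/${sysdev##*/}
        # skip zoned namespaces; "none" means a regular (non-zoned) block device
        [[ -e "$sysdev/queue/zoned" && $(cat "$sysdev/queue/zoned") != none ]] && continue
        # same probe the trace shows: an empty PTTYPE value means no usable partition table
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1    # wipe stale signatures before the tests rebind the disk
        fi
    done

The setup.sh test suite that follows (acl, then hugepages) exercises this same scripts/setup.sh entry point, just with different PCI allow/block lists and hugepage settings.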
00:03:30.498 11:41:23 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:30.499 11:41:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:30.499 11:41:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:30.499 11:41:23 -- common/autotest_common.sh@10 -- # set +x 00:03:30.499 ************************************ 00:03:30.499 START TEST setup.sh 00:03:30.499 ************************************ 00:03:30.499 11:41:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:30.499 * Looking for test storage... 00:03:30.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:30.499 11:41:23 -- setup/test-setup.sh@10 -- # uname -s 00:03:30.499 11:41:23 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:30.499 11:41:23 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:30.499 11:41:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:30.499 11:41:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:30.499 11:41:23 -- common/autotest_common.sh@10 -- # set +x 00:03:30.499 ************************************ 00:03:30.499 START TEST acl 00:03:30.499 ************************************ 00:03:30.499 11:41:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:30.499 * Looking for test storage... 00:03:30.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:30.499 11:41:23 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:30.499 11:41:23 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:30.499 11:41:23 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:30.499 11:41:23 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:30.499 11:41:23 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:30.499 11:41:23 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:30.499 11:41:23 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:30.499 11:41:23 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:30.499 11:41:23 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:30.499 11:41:23 -- setup/acl.sh@12 -- # devs=() 00:03:30.499 11:41:23 -- setup/acl.sh@12 -- # declare -a devs 00:03:30.499 11:41:23 -- setup/acl.sh@13 -- # drivers=() 00:03:30.499 11:41:23 -- setup/acl.sh@13 -- # declare -A drivers 00:03:30.499 11:41:23 -- setup/acl.sh@51 -- # setup reset 00:03:30.499 11:41:23 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:30.499 11:41:23 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:34.704 11:41:27 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:34.704 11:41:27 -- setup/acl.sh@16 -- # local dev driver 00:03:34.704 11:41:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.704 11:41:27 -- setup/acl.sh@15 -- # setup output status 00:03:34.704 11:41:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.704 11:41:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:37.249 Hugepages 00:03:37.249 node hugesize free / total 00:03:37.249 11:41:30 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:37.249 11:41:30 -- setup/acl.sh@19 -- # continue 00:03:37.249 11:41:30 
-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.249 11:41:30 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:37.249 11:41:30 -- setup/acl.sh@19 -- # continue 00:03:37.249 11:41:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.249 11:41:30 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:37.249 11:41:30 -- setup/acl.sh@19 -- # continue 00:03:37.249 11:41:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.249 00:03:37.249 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:37.249 11:41:30 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:37.249 11:41:30 -- setup/acl.sh@19 -- # continue 00:03:37.249 11:41:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.249 11:41:30 -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:37.249 11:41:30 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.249 11:41:30 -- setup/acl.sh@20 -- # continue 00:03:37.249 11:41:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.249 11:41:30 -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:37.249 11:41:30 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.249 11:41:30 -- setup/acl.sh@20 -- # continue 00:03:37.249 11:41:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.249 11:41:30 -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:37.249 11:41:30 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.249 11:41:30 -- setup/acl.sh@20 -- # continue 00:03:37.249 11:41:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.249 11:41:30 -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:37.249 11:41:30 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.249 11:41:30 -- setup/acl.sh@20 -- # continue 00:03:37.249 11:41:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.249 11:41:30 -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:37.249 11:41:30 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.249 11:41:30 -- setup/acl.sh@20 -- # continue 00:03:37.249 11:41:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.249 11:41:30 -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:37.249 11:41:30 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.249 11:41:30 -- setup/acl.sh@20 -- # continue 00:03:37.249 11:41:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.249 11:41:30 -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:37.249 11:41:30 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.249 11:41:30 -- setup/acl.sh@20 -- # continue 00:03:37.249 11:41:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.249 11:41:30 -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:37.249 11:41:30 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.249 11:41:30 -- setup/acl.sh@20 -- # continue 00:03:37.249 11:41:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.510 11:41:31 -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:37.510 11:41:31 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:37.510 11:41:31 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:37.510 11:41:31 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:37.510 11:41:31 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:37.510 11:41:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.510 11:41:31 -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:37.510 11:41:31 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.510 11:41:31 -- setup/acl.sh@20 -- # 
continue 00:03:37.510 11:41:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.510 11:41:31 -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:37.510 11:41:31 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.510 11:41:31 -- setup/acl.sh@20 -- # continue 00:03:37.510 11:41:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.510 11:41:31 -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:37.510 11:41:31 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.510 11:41:31 -- setup/acl.sh@20 -- # continue 00:03:37.510 11:41:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.510 11:41:31 -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:37.510 11:41:31 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.510 11:41:31 -- setup/acl.sh@20 -- # continue 00:03:37.510 11:41:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.511 11:41:31 -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:37.511 11:41:31 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.511 11:41:31 -- setup/acl.sh@20 -- # continue 00:03:37.511 11:41:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.511 11:41:31 -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:37.511 11:41:31 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.511 11:41:31 -- setup/acl.sh@20 -- # continue 00:03:37.511 11:41:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.511 11:41:31 -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:37.511 11:41:31 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.511 11:41:31 -- setup/acl.sh@20 -- # continue 00:03:37.511 11:41:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.511 11:41:31 -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:37.511 11:41:31 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.511 11:41:31 -- setup/acl.sh@20 -- # continue 00:03:37.511 11:41:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.511 11:41:31 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:37.511 11:41:31 -- setup/acl.sh@54 -- # run_test denied denied 00:03:37.511 11:41:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:37.511 11:41:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:37.511 11:41:31 -- common/autotest_common.sh@10 -- # set +x 00:03:37.511 ************************************ 00:03:37.511 START TEST denied 00:03:37.511 ************************************ 00:03:37.511 11:41:31 -- common/autotest_common.sh@1104 -- # denied 00:03:37.511 11:41:31 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:37.511 11:41:31 -- setup/acl.sh@38 -- # setup output config 00:03:37.511 11:41:31 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:37.511 11:41:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.511 11:41:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:41.719 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:41.719 11:41:34 -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:41.719 11:41:34 -- setup/acl.sh@28 -- # local dev driver 00:03:41.719 11:41:34 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:41.719 11:41:34 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:41.719 11:41:34 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:41.719 11:41:34 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:41.719 11:41:34 -- 
setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:41.719 11:41:34 -- setup/acl.sh@41 -- # setup reset 00:03:41.719 11:41:34 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:41.719 11:41:34 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:45.934 00:03:45.934 real 0m8.196s 00:03:45.934 user 0m2.753s 00:03:45.934 sys 0m4.787s 00:03:45.934 11:41:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.934 11:41:39 -- common/autotest_common.sh@10 -- # set +x 00:03:45.934 ************************************ 00:03:45.934 END TEST denied 00:03:45.934 ************************************ 00:03:45.934 11:41:39 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:45.934 11:41:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:45.934 11:41:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:45.934 11:41:39 -- common/autotest_common.sh@10 -- # set +x 00:03:45.934 ************************************ 00:03:45.934 START TEST allowed 00:03:45.934 ************************************ 00:03:45.934 11:41:39 -- common/autotest_common.sh@1104 -- # allowed 00:03:45.934 11:41:39 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:45.934 11:41:39 -- setup/acl.sh@45 -- # setup output config 00:03:45.934 11:41:39 -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:45.934 11:41:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.934 11:41:39 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:51.229 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:51.229 11:41:44 -- setup/acl.sh@47 -- # verify 00:03:51.229 11:41:44 -- setup/acl.sh@28 -- # local dev driver 00:03:51.229 11:41:44 -- setup/acl.sh@48 -- # setup reset 00:03:51.229 11:41:44 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:51.229 11:41:44 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:55.439 00:03:55.439 real 0m9.163s 00:03:55.439 user 0m2.766s 00:03:55.439 sys 0m4.715s 00:03:55.439 11:41:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.439 11:41:48 -- common/autotest_common.sh@10 -- # set +x 00:03:55.439 ************************************ 00:03:55.439 END TEST allowed 00:03:55.439 ************************************ 00:03:55.439 00:03:55.439 real 0m24.649s 00:03:55.439 user 0m8.086s 00:03:55.439 sys 0m14.409s 00:03:55.439 11:41:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.439 11:41:48 -- common/autotest_common.sh@10 -- # set +x 00:03:55.439 ************************************ 00:03:55.439 END TEST acl 00:03:55.439 ************************************ 00:03:55.439 11:41:48 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:55.440 11:41:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:55.440 11:41:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:55.440 11:41:48 -- common/autotest_common.sh@10 -- # set +x 00:03:55.440 ************************************ 00:03:55.440 START TEST hugepages 00:03:55.440 ************************************ 00:03:55.440 11:41:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:55.440 * Looking for test storage... 
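The acl suite that just finished (denied, then allowed) verifies the PCI filtering behaviour of scripts/setup.sh: the denied test exports PCI_BLOCKED for the NVMe controller and requires the output line "Skipping denied controller at 0000:65:00.0", while the allowed test exports PCI_ALLOWED for the same address and requires that the controller is rebound from nvme to vfio-pci. A hedged sketch of those two checks; PCI_BLOCKED, PCI_ALLOWED and the grep patterns are taken from the trace, the surrounding wrapper is illustrative:

    SETUP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
    out=$(PCI_BLOCKED=' 0000:65:00.0' "$SETUP" config)
    grep -q 'Skipping denied controller at 0000:65:00.0' <<<"$out"    # denied: controller must be left alone
    "$SETUP" reset
    out=$(PCI_ALLOWED='0000:65:00.0' "$SETUP" config)
    grep -qE '0000:65:00.0 .*: nvme -> .*' <<<"$out"                  # allowed: controller is handed to vfio-pci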
00:03:55.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:55.440 11:41:48 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:55.440 11:41:48 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:55.440 11:41:48 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:55.440 11:41:48 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:55.440 11:41:48 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:55.440 11:41:48 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:55.440 11:41:48 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:55.440 11:41:48 -- setup/common.sh@18 -- # local node= 00:03:55.440 11:41:48 -- setup/common.sh@19 -- # local var val 00:03:55.440 11:41:48 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.440 11:41:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.440 11:41:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.440 11:41:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.440 11:41:48 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.440 11:41:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 107209588 kB' 'MemAvailable: 110551120 kB' 'Buffers: 4132 kB' 'Cached: 10212500 kB' 'SwapCached: 0 kB' 'Active: 7304820 kB' 'Inactive: 3525960 kB' 'Active(anon): 6814224 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 617952 kB' 'Mapped: 205404 kB' 'Shmem: 6200076 kB' 'KReclaimable: 298612 kB' 'Slab: 1145908 kB' 'SReclaimable: 298612 kB' 'SUnreclaim: 847296 kB' 'KernelStack: 27584 kB' 'PageTables: 9712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460884 kB' 'Committed_AS: 8412400 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235964 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB' 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.440 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 
00:03:55.440 11:41:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.440 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 
00:03:55.441 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # continue 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.441 11:41:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.441 11:41:48 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.441 11:41:48 -- setup/common.sh@33 -- # echo 2048 00:03:55.441 11:41:48 -- setup/common.sh@33 -- # return 0 00:03:55.441 11:41:48 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:55.441 11:41:48 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:55.441 11:41:48 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:55.441 11:41:48 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:55.441 11:41:48 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:55.441 11:41:48 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:55.441 11:41:48 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:55.441 11:41:48 -- setup/hugepages.sh@207 -- # get_nodes 00:03:55.441 11:41:48 -- setup/hugepages.sh@27 -- # local node 00:03:55.441 11:41:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.441 11:41:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:55.441 11:41:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.441 11:41:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:55.441 11:41:48 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:55.441 11:41:48 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.441 11:41:48 -- setup/hugepages.sh@208 -- # clear_hp 00:03:55.441 11:41:48 -- setup/hugepages.sh@37 -- # local node hp 00:03:55.441 11:41:48 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:55.441 11:41:48 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:55.441 11:41:48 -- setup/hugepages.sh@41 -- # echo 0 00:03:55.441 11:41:48 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:55.441 11:41:48 -- setup/hugepages.sh@41 -- # echo 0 00:03:55.441 11:41:48 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:55.441 11:41:48 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:55.441 11:41:48 -- setup/hugepages.sh@41 -- # echo 0 00:03:55.441 11:41:48 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:55.441 11:41:48 -- setup/hugepages.sh@41 -- # echo 0 00:03:55.441 11:41:48 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:55.441 11:41:48 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:55.441 11:41:48 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:55.441 11:41:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:55.441 11:41:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:55.441 11:41:48 -- common/autotest_common.sh@10 -- # set +x 00:03:55.441 ************************************ 00:03:55.441 START TEST default_setup 00:03:55.441 ************************************ 00:03:55.441 11:41:48 -- common/autotest_common.sh@1104 -- # default_setup 00:03:55.441 11:41:48 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:55.441 11:41:48 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:55.441 11:41:48 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:55.441 11:41:48 -- setup/hugepages.sh@51 -- # shift 00:03:55.441 11:41:48 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:55.441 11:41:48 -- setup/hugepages.sh@52 -- # local node_ids 00:03:55.441 11:41:48 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.441 11:41:48 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:55.441 11:41:48 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:55.441 11:41:48 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:55.441 11:41:48 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.441 11:41:48 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:55.441 11:41:48 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:55.441 11:41:48 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.441 11:41:48 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.441 11:41:48 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
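The long run of [[ ... == Hugepagesize ]] checks above is get_meminfo scanning /proc/meminfo token by token until it reaches the Hugepagesize field and returns 2048 (kB); get_nodes then finds the two NUMA nodes, and clear_hp writes 0 to every per-node, per-size nr_hugepages file before setting CLEAR_HUGE=yes. The default_setup test now starting requests 2097152 kB of default-size pages on node 0, which works out to 2097152 / 2048 = 1024 pages for setup.sh to reserve. An illustrative equivalent of those steps, with direct sysfs writes standing in for what scripts/setup.sh does internally:

    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)    # 2048 on this machine
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"                                                    # clear_hp: drop any existing reservation
    done
    # default_setup: 2097152 kB of default-size pages on node 0 -> 2097152 / 2048 = 1024 pages
    echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-${hugepagesize_kb}kB/nr_hugepages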
00:03:55.441 11:41:48 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:55.441 11:41:48 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:55.441 11:41:48 -- setup/hugepages.sh@73 -- # return 0 00:03:55.441 11:41:48 -- setup/hugepages.sh@137 -- # setup output 00:03:55.441 11:41:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.441 11:41:48 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:58.746 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:58.747 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:58.747 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:58.747 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:58.747 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:58.747 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:58.747 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:58.747 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:58.747 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:58.747 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:58.747 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:58.747 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:58.747 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:58.747 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:58.747 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:58.747 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:58.747 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:58.747 11:41:52 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:58.747 11:41:52 -- setup/hugepages.sh@89 -- # local node 00:03:58.747 11:41:52 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:58.747 11:41:52 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:58.747 11:41:52 -- setup/hugepages.sh@92 -- # local surp 00:03:58.747 11:41:52 -- setup/hugepages.sh@93 -- # local resv 00:03:58.747 11:41:52 -- setup/hugepages.sh@94 -- # local anon 00:03:58.747 11:41:52 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:58.747 11:41:52 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:58.747 11:41:52 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:58.747 11:41:52 -- setup/common.sh@18 -- # local node= 00:03:58.747 11:41:52 -- setup/common.sh@19 -- # local var val 00:03:58.747 11:41:52 -- setup/common.sh@20 -- # local mem_f mem 00:03:58.747 11:41:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.747 11:41:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.747 11:41:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.747 11:41:52 -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.747 11:41:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.747 11:41:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109386792 kB' 'MemAvailable: 112727892 kB' 'Buffers: 4132 kB' 'Cached: 10212628 kB' 'SwapCached: 0 kB' 'Active: 7321516 kB' 'Inactive: 3525960 kB' 'Active(anon): 6830920 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633668 kB' 'Mapped: 205796 kB' 'Shmem: 6200204 kB' 'KReclaimable: 297748 kB' 'Slab: 1143236 kB' 'SReclaimable: 297748 kB' 'SUnreclaim: 845488 kB' 'KernelStack: 27584 
kB' 'PageTables: 9168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8428884 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235980 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB' 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.747 
11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # [[ 
KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.747 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.747 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.748 11:41:52 -- setup/common.sh@33 -- # echo 0 00:03:58.748 11:41:52 -- setup/common.sh@33 -- # return 0 00:03:58.748 11:41:52 -- setup/hugepages.sh@97 -- # anon=0 00:03:58.748 11:41:52 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:58.748 11:41:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.748 11:41:52 -- setup/common.sh@18 -- # local node= 00:03:58.748 11:41:52 -- setup/common.sh@19 -- # local var val 00:03:58.748 11:41:52 -- setup/common.sh@20 -- # local mem_f mem 00:03:58.748 11:41:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.748 11:41:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.748 11:41:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.748 11:41:52 -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.748 11:41:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109388412 kB' 'MemAvailable: 112729512 kB' 'Buffers: 4132 kB' 'Cached: 10212632 kB' 'SwapCached: 0 kB' 'Active: 7320804 kB' 'Inactive: 3525960 kB' 'Active(anon): 6830208 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633372 kB' 'Mapped: 205676 kB' 'Shmem: 6200208 kB' 'KReclaimable: 297748 kB' 'Slab: 1143200 kB' 'SReclaimable: 297748 kB' 'SUnreclaim: 845452 kB' 'KernelStack: 27584 kB' 'PageTables: 9156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8428896 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235948 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB' 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.748 11:41:52 
-- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.748 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.748 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 
00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.749 11:41:52 -- setup/common.sh@33 -- # echo 0 00:03:58.749 11:41:52 -- setup/common.sh@33 -- # return 0 00:03:58.749 11:41:52 -- setup/hugepages.sh@99 -- # surp=0 00:03:58.749 11:41:52 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:58.749 11:41:52 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:58.749 11:41:52 -- setup/common.sh@18 -- # local node= 00:03:58.749 11:41:52 -- setup/common.sh@19 -- # local var val 00:03:58.749 11:41:52 -- setup/common.sh@20 -- # local mem_f mem 00:03:58.749 11:41:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.749 11:41:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.749 11:41:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.749 11:41:52 -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.749 11:41:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.749 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.749 11:41:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109390148 kB' 'MemAvailable: 112731248 kB' 'Buffers: 4132 kB' 'Cached: 10212644 kB' 'SwapCached: 0 kB' 'Active: 7320768 kB' 'Inactive: 3525960 kB' 'Active(anon): 6830172 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633372 kB' 'Mapped: 205676 kB' 'Shmem: 6200220 kB' 'KReclaimable: 297748 kB' 'Slab: 1143200 kB' 'SReclaimable: 297748 kB' 'SUnreclaim: 845452 kB' 'KernelStack: 27584 kB' 'PageTables: 9156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8428912 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235948 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB' 00:03:58.749 11:41:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 
-- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 
11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # 
continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.750 11:41:52 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.750 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.750 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.751 11:41:52 -- setup/common.sh@33 -- # echo 0 00:03:58.751 11:41:52 -- setup/common.sh@33 -- # return 0 00:03:58.751 11:41:52 -- setup/hugepages.sh@100 -- # resv=0 00:03:58.751 11:41:52 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:58.751 nr_hugepages=1024 00:03:58.751 11:41:52 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:58.751 resv_hugepages=0 00:03:58.751 11:41:52 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:58.751 surplus_hugepages=0 00:03:58.751 11:41:52 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:58.751 anon_hugepages=0 00:03:58.751 11:41:52 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.751 11:41:52 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:58.751 11:41:52 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:58.751 11:41:52 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:03:58.751 11:41:52 -- setup/common.sh@18 -- # local node= 00:03:58.751 11:41:52 -- setup/common.sh@19 -- # local var val 00:03:58.751 11:41:52 -- setup/common.sh@20 -- # local mem_f mem 00:03:58.751 11:41:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.751 11:41:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.751 11:41:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.751 11:41:52 -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.751 11:41:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.751 11:41:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109391516 kB' 'MemAvailable: 112732616 kB' 'Buffers: 4132 kB' 'Cached: 10212672 kB' 'SwapCached: 0 kB' 'Active: 7320748 kB' 'Inactive: 3525960 kB' 'Active(anon): 6830152 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633304 kB' 'Mapped: 205676 kB' 'Shmem: 6200248 kB' 'KReclaimable: 297748 kB' 'Slab: 1143200 kB' 'SReclaimable: 297748 kB' 'SUnreclaim: 845452 kB' 'KernelStack: 27568 kB' 'PageTables: 9108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8428928 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235948 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB' 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:58.751 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.751 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.751 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # 
continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 
00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.752 11:41:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.752 11:41:52 -- setup/common.sh@33 -- # echo 1024 00:03:58.752 11:41:52 -- setup/common.sh@33 -- # return 0 00:03:58.752 11:41:52 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.752 11:41:52 -- setup/hugepages.sh@112 -- # get_nodes 00:03:58.752 11:41:52 -- setup/hugepages.sh@27 -- # local node 00:03:58.752 11:41:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.752 11:41:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:58.752 11:41:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.752 11:41:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:58.752 11:41:52 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:58.752 11:41:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.752 11:41:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.752 11:41:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.752 11:41:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:58.752 11:41:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.752 11:41:52 -- setup/common.sh@18 -- # local node=0 00:03:58.752 11:41:52 -- setup/common.sh@19 -- # local var val 00:03:58.752 11:41:52 -- setup/common.sh@20 -- # local mem_f mem 00:03:58.752 11:41:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.752 11:41:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:58.752 11:41:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:58.752 11:41:52 -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.752 11:41:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.752 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52783208 kB' 'MemUsed: 12875800 kB' 'SwapCached: 0 
kB' 'Active: 5143564 kB' 'Inactive: 3325564 kB' 'Active(anon): 4808108 kB' 'Inactive(anon): 0 kB' 'Active(file): 335456 kB' 'Inactive(file): 3325564 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8246180 kB' 'Mapped: 117772 kB' 'AnonPages: 226292 kB' 'Shmem: 4585160 kB' 'KernelStack: 14568 kB' 'PageTables: 5772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 169508 kB' 'Slab: 640140 kB' 'SReclaimable: 169508 kB' 'SUnreclaim: 470632 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 
11:41:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': 
' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # continue 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.753 11:41:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.753 11:41:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.753 11:41:52 -- setup/common.sh@33 -- # echo 0 00:03:58.753 11:41:52 -- setup/common.sh@33 -- # return 0 00:03:58.753 11:41:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.753 11:41:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.753 11:41:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.753 11:41:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.753 11:41:52 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:58.753 node0=1024 expecting 1024 00:03:58.753 11:41:52 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:58.753 00:03:58.753 real 0m3.792s 00:03:58.753 user 0m1.487s 00:03:58.753 sys 0m2.310s 00:03:58.753 11:41:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.753 11:41:52 -- common/autotest_common.sh@10 -- # set +x 00:03:58.753 ************************************ 00:03:58.754 END TEST default_setup 00:03:58.754 ************************************ 00:03:59.014 11:41:52 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:59.014 11:41:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:59.014 11:41:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:59.015 11:41:52 -- common/autotest_common.sh@10 -- # set +x 00:03:59.015 ************************************ 00:03:59.015 START TEST per_node_1G_alloc 00:03:59.015 ************************************ 00:03:59.015 11:41:52 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:03:59.015 11:41:52 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:59.015 11:41:52 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:59.015 11:41:52 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:59.015 11:41:52 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:59.015 11:41:52 -- setup/hugepages.sh@51 -- # shift 00:03:59.015 11:41:52 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:59.015 11:41:52 -- setup/hugepages.sh@52 -- # local node_ids 00:03:59.015 11:41:52 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.015 11:41:52 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:59.015 11:41:52 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:59.015 11:41:52 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:59.015 11:41:52 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.015 11:41:52 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:59.015 11:41:52 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.015 11:41:52 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.015 11:41:52 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.015 11:41:52 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:59.015 11:41:52 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:59.015 11:41:52 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:59.015 11:41:52 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:59.015 11:41:52 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:59.015 11:41:52 -- setup/hugepages.sh@73 -- # return 0 00:03:59.015 11:41:52 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:59.015 
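For readability, here is a minimal reconstruction of the meminfo lookup that the setup/common.sh xtrace above keeps exercising (get_meminfo HugePages_Total, then HugePages_Surp for node 0): the script picks /proc/meminfo or the per-node /sys/devices/system/node/nodeN/meminfo file, drops the "Node N " prefix, splits each "Key: value" line with IFS=': ', skips non-matching keys with continue, and echoes the value once the requested key is found. The helper name get_meminfo_sketch and the exact control flow below are assumptions inferred from the trace, not the verbatim setup/common.sh implementation.

#!/usr/bin/env bash
# Sketch of the key lookup traced above; an approximation, not the real script.
get_meminfo_sketch() {
    local get=$1 node=${2:-}          # e.g. get=HugePages_Surp node=0
    local var val _
    local mem_f=/proc/meminfo

    # Per-node lookups read the node-specific meminfo file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    # Per-node files prefix every line with "Node N "; strip it, then split
    # each "Key: value" line the same way the trace does (IFS=': ').
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the long runs of 'continue' above
        echo "$val"                        # e.g. 1024 for HugePages_Total
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1                               # requested key not present
}

# Usage mirroring the checks in the trace:
#   total=$(get_meminfo_sketch HugePages_Total)      # expected 1024
#   surp0=$(get_meminfo_sketch HugePages_Surp 0)     # expected 0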
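The per_node_1G_alloc setup traced here computes the per-node hugepage counts that the NRHUGE/HUGENODE exports which follow express: a 1048576 kB (1 GiB) request against the default 2048 kB hugepage size yields 512 pages, assigned to each of nodes 0 and 1, i.e. 1024 pages system-wide, which is the HugePages_Total the later verification expects. The snippet below only illustrates that arithmetic, with variable names borrowed from the trace; it is not the script itself.

# Illustration of the allocation computed by get_test_nr_hugepages 1048576 0 1.
size_kb=1048576                             # requested size: 1 GiB per node
hugepage_kb=2048                            # default hugepage size on this system
nr_hugepages=$(( size_kb / hugepage_kb ))   # 512
nodes_test=()
for node in 0 1; do                         # user-selected NUMA nodes
    nodes_test[$node]=$nr_hugepages         # 512 pages on node0 and node1
done
echo "NRHUGE=$nr_hugepages HUGENODE=0,1"    # matches the exports in the trace
echo "expected HugePages_Total: $(( nr_hugepages * 2 ))"   # 1024 across both nodes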
11:41:52 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:59.015 11:41:52 -- setup/hugepages.sh@146 -- # setup output 00:03:59.015 11:41:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.015 11:41:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:02.319 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:02.319 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:02.319 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:02.319 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:02.319 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:02.319 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:02.319 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:02.319 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:02.319 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:02.319 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:02.319 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:02.319 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:02.319 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:02.319 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:02.319 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:02.319 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:02.319 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:02.319 11:41:56 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:02.319 11:41:56 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:02.319 11:41:56 -- setup/hugepages.sh@89 -- # local node 00:04:02.319 11:41:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:02.319 11:41:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:02.319 11:41:56 -- setup/hugepages.sh@92 -- # local surp 00:04:02.319 11:41:56 -- setup/hugepages.sh@93 -- # local resv 00:04:02.319 11:41:56 -- setup/hugepages.sh@94 -- # local anon 00:04:02.319 11:41:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:02.319 11:41:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:02.319 11:41:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:02.319 11:41:56 -- setup/common.sh@18 -- # local node= 00:04:02.319 11:41:56 -- setup/common.sh@19 -- # local var val 00:04:02.319 11:41:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.319 11:41:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.319 11:41:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.319 11:41:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.319 11:41:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.319 11:41:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 11:41:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109406656 kB' 'MemAvailable: 112747756 kB' 'Buffers: 4132 kB' 'Cached: 10212764 kB' 'SwapCached: 0 kB' 'Active: 7319552 kB' 'Inactive: 3525960 kB' 'Active(anon): 6828956 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 631388 kB' 'Mapped: 204552 
kB' 'Shmem: 6200340 kB' 'KReclaimable: 297748 kB' 'Slab: 1143052 kB' 'SReclaimable: 297748 kB' 'SUnreclaim: 845304 kB' 'KernelStack: 27504 kB' 'PageTables: 8900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8415076 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235900 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB' 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 11:41:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.319 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.320 11:41:56 -- setup/common.sh@33 -- # echo 0 00:04:02.320 11:41:56 -- setup/common.sh@33 -- # return 0 00:04:02.320 11:41:56 -- setup/hugepages.sh@97 -- # anon=0 00:04:02.320 11:41:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:02.320 11:41:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.320 11:41:56 -- setup/common.sh@18 -- # local node= 00:04:02.320 11:41:56 -- setup/common.sh@19 -- # local var val 00:04:02.320 11:41:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.320 11:41:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.320 11:41:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.320 11:41:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.320 11:41:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.320 11:41:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109406664 kB' 'MemAvailable: 112747764 kB' 'Buffers: 4132 kB' 'Cached: 10212764 kB' 'SwapCached: 0 kB' 'Active: 7319216 kB' 'Inactive: 3525960 kB' 'Active(anon): 6828620 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 631112 kB' 'Mapped: 204544 kB' 'Shmem: 6200340 kB' 'KReclaimable: 297748 kB' 'Slab: 1142980 kB' 'SReclaimable: 297748 kB' 'SUnreclaim: 845232 kB' 'KernelStack: 27504 kB' 'PageTables: 8880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8415088 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235884 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB' 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.320 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.320 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.585 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.585 11:41:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.585 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.585 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.585 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.585 11:41:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.585 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.585 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.585 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.585 11:41:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.585 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.585 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.585 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.585 11:41:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.585 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.585 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.585 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 
11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:02.586 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.586 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.587 11:41:56 -- setup/common.sh@33 -- # echo 0 00:04:02.587 11:41:56 -- setup/common.sh@33 -- # return 0 00:04:02.587 11:41:56 -- setup/hugepages.sh@99 -- # surp=0 00:04:02.587 11:41:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:02.587 11:41:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:02.587 11:41:56 -- setup/common.sh@18 -- # local node= 00:04:02.587 11:41:56 -- setup/common.sh@19 -- # local var val 00:04:02.587 11:41:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.587 11:41:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.587 11:41:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.587 11:41:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.587 11:41:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.587 11:41:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.587 11:41:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109410216 kB' 'MemAvailable: 112751316 kB' 'Buffers: 4132 kB' 'Cached: 10212776 kB' 'SwapCached: 0 kB' 'Active: 7318792 kB' 'Inactive: 3525960 kB' 'Active(anon): 6828196 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 631144 kB' 'Mapped: 204468 kB' 'Shmem: 6200352 kB' 'KReclaimable: 297748 kB' 'Slab: 1142924 kB' 'SReclaimable: 297748 kB' 'SUnreclaim: 845176 kB' 'KernelStack: 27488 kB' 'PageTables: 8824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8414732 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235820 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 11:41:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 
00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.588 11:41:56 -- setup/common.sh@33 -- # echo 0 00:04:02.588 11:41:56 -- setup/common.sh@33 -- # return 0 00:04:02.588 11:41:56 -- setup/hugepages.sh@100 -- # resv=0 00:04:02.588 11:41:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:02.588 nr_hugepages=1024 00:04:02.588 11:41:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.588 resv_hugepages=0 00:04:02.588 11:41:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.588 surplus_hugepages=0 00:04:02.588 11:41:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.588 anon_hugepages=0 00:04:02.588 11:41:56 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.588 11:41:56 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 
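(Editor's note, illustrative only.) The xtrace above shows the test's meminfo helper at work: it picks /proc/meminfo (or a per-node /sys/devices/system/node/nodeN/meminfo file when a node is given), reads it with mapfile, strips any "Node N " prefix, then splits each line on ': ' and returns the value of the requested field (HugePages_Rsvd, HugePages_Surp, HugePages_Total, ...). The sketch below is a minimal, self-contained re-implementation of that idea under assumed names (get_meminfo_value is chosen for this example); it is not the SPDK helper itself, only a hedged stand-in that mirrors the parsing visible in the trace.

#!/usr/bin/env bash
# Minimal sketch of the meminfo lookup traced above. Names here are
# illustrative; the real helper lives in SPDK's test/setup scripts.
shopt -s extglob

get_meminfo_value() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local var val _

        # With a NUMA node argument, read that node's meminfo instead,
        # as the trace does for node0 and node1 further down.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        local line
        for line in "${mem[@]}"; do
                # Split "Field:   value kB" into field name and value.
                IFS=': ' read -r var val _ <<< "$line"
                if [[ $var == "$get" ]]; then
                        echo "$val"
                        return 0
                fi
        done
        return 1
}

# Usage mirroring the checks in the log: compare the global pool
# against reserved and surplus pages.
nr=$(get_meminfo_value HugePages_Total)
resv=$(get_meminfo_value HugePages_Rsvd)
surp=$(get_meminfo_value HugePages_Surp)
echo "nr_hugepages=$nr resv_hugepages=$resv surplus_hugepages=$surp"

This is the same accounting the log performs next: with nr_hugepages=1024 and both resv and surp equal to 0, the assertion (( 1024 == nr_hugepages + surp + resv )) holds before the per-node totals are checked.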
00:04:02.588 11:41:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.588 11:41:56 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.588 11:41:56 -- setup/common.sh@18 -- # local node= 00:04:02.588 11:41:56 -- setup/common.sh@19 -- # local var val 00:04:02.588 11:41:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.588 11:41:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.588 11:41:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.588 11:41:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.588 11:41:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.588 11:41:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109409668 kB' 'MemAvailable: 112750768 kB' 'Buffers: 4132 kB' 'Cached: 10212808 kB' 'SwapCached: 0 kB' 'Active: 7318064 kB' 'Inactive: 3525960 kB' 'Active(anon): 6827468 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 630364 kB' 'Mapped: 204468 kB' 'Shmem: 6200384 kB' 'KReclaimable: 297748 kB' 'Slab: 1142900 kB' 'SReclaimable: 297748 kB' 'SUnreclaim: 845152 kB' 'KernelStack: 27440 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8414752 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235820 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB' 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.588 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 
-- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 
00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- 
setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.589 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.589 11:41:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.589 11:41:56 -- setup/common.sh@33 -- # echo 1024 00:04:02.589 11:41:56 -- setup/common.sh@33 -- # return 0 00:04:02.589 11:41:56 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.589 11:41:56 -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.589 11:41:56 -- setup/hugepages.sh@27 -- # local node 00:04:02.589 11:41:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.589 11:41:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:02.589 11:41:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.589 11:41:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:02.589 11:41:56 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:02.589 11:41:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.589 11:41:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.589 11:41:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.589 11:41:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.589 11:41:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.589 11:41:56 -- setup/common.sh@18 -- # local node=0 00:04:02.589 11:41:56 -- setup/common.sh@19 -- # local var val 00:04:02.589 11:41:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.589 11:41:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.589 11:41:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.589 11:41:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.590 11:41:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.590 11:41:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:02.590 11:41:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53851504 kB' 'MemUsed: 11807504 kB' 'SwapCached: 0 kB' 'Active: 5140844 kB' 'Inactive: 3325564 kB' 'Active(anon): 4805388 kB' 'Inactive(anon): 0 kB' 'Active(file): 335456 kB' 'Inactive(file): 3325564 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8246268 kB' 'Mapped: 117260 kB' 'AnonPages: 223404 kB' 'Shmem: 4585248 kB' 'KernelStack: 14472 kB' 'PageTables: 5436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 169508 kB' 'Slab: 639772 kB' 'SReclaimable: 169508 kB' 'SUnreclaim: 470264 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 
-- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.590 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.590 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 
00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@33 -- # echo 0 00:04:02.591 11:41:56 -- setup/common.sh@33 -- # return 0 00:04:02.591 11:41:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.591 11:41:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.591 11:41:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.591 11:41:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:02.591 11:41:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.591 11:41:56 -- setup/common.sh@18 -- # local node=1 00:04:02.591 11:41:56 -- setup/common.sh@19 -- # local var val 00:04:02.591 11:41:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.591 11:41:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.591 11:41:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:02.591 11:41:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:02.591 11:41:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.591 11:41:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679856 kB' 'MemFree: 55558164 kB' 'MemUsed: 5121692 kB' 'SwapCached: 0 kB' 'Active: 2177228 kB' 'Inactive: 200396 kB' 'Active(anon): 2022088 kB' 'Inactive(anon): 0 kB' 'Active(file): 155140 kB' 'Inactive(file): 200396 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1970688 kB' 'Mapped: 87208 kB' 'AnonPages: 406960 kB' 'Shmem: 1615152 kB' 'KernelStack: 12968 kB' 'PageTables: 3176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128240 kB' 'Slab: 503128 kB' 'SReclaimable: 128240 kB' 'SUnreclaim: 374888 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 
00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.591 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.591 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.592 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.592 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.592 11:41:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.592 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.592 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.592 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.592 11:41:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.592 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.592 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.592 11:41:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.592 11:41:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.592 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.592 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.592 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.592 11:41:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.592 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.592 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.592 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.592 11:41:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.592 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.592 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.592 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.592 11:41:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.592 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.592 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.592 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.592 11:41:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.592 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.592 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.592 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.592 11:41:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.592 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.592 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.592 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.592 11:41:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.592 11:41:56 -- setup/common.sh@32 -- # continue 00:04:02.592 11:41:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.592 11:41:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.592 11:41:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.592 11:41:56 -- setup/common.sh@33 -- # echo 0 00:04:02.592 11:41:56 -- setup/common.sh@33 -- # return 0 00:04:02.592 11:41:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.592 11:41:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.592 11:41:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.592 11:41:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.592 11:41:56 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:02.592 node0=512 expecting 512 00:04:02.592 11:41:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.592 11:41:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.592 11:41:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.592 11:41:56 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:02.592 node1=512 expecting 512 00:04:02.592 11:41:56 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:02.592 00:04:02.592 real 0m3.704s 00:04:02.592 user 0m1.487s 00:04:02.592 sys 0m2.280s 00:04:02.592 11:41:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.592 11:41:56 -- common/autotest_common.sh@10 -- # set +x 00:04:02.592 ************************************ 00:04:02.592 END TEST per_node_1G_alloc 00:04:02.592 ************************************ 00:04:02.592 11:41:56 -- 
setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:02.592 11:41:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:02.592 11:41:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:02.592 11:41:56 -- common/autotest_common.sh@10 -- # set +x 00:04:02.592 ************************************ 00:04:02.592 START TEST even_2G_alloc 00:04:02.592 ************************************ 00:04:02.592 11:41:56 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:04:02.592 11:41:56 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:02.592 11:41:56 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:02.592 11:41:56 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:02.592 11:41:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:02.592 11:41:56 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:02.592 11:41:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:02.592 11:41:56 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:02.592 11:41:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.592 11:41:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:02.592 11:41:56 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:02.592 11:41:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.592 11:41:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:02.592 11:41:56 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:02.592 11:41:56 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:02.592 11:41:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:02.592 11:41:56 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:02.592 11:41:56 -- setup/hugepages.sh@83 -- # : 512 00:04:02.592 11:41:56 -- setup/hugepages.sh@84 -- # : 1 00:04:02.592 11:41:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:02.592 11:41:56 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:02.592 11:41:56 -- setup/hugepages.sh@83 -- # : 0 00:04:02.592 11:41:56 -- setup/hugepages.sh@84 -- # : 0 00:04:02.592 11:41:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:02.592 11:41:56 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:02.592 11:41:56 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:02.592 11:41:56 -- setup/hugepages.sh@153 -- # setup output 00:04:02.592 11:41:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.592 11:41:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:05.972 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:05.972 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:05.972 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:05.972 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:05.972 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:05.972 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:05.972 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:05.972 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:05.972 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:05.972 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:05.972 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:05.972 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:05.972 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:05.972 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:05.972 0000:00:01.3 (8086 
0b00): Already using the vfio-pci driver 00:04:05.972 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:05.972 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:06.237 11:41:59 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:06.237 11:41:59 -- setup/hugepages.sh@89 -- # local node 00:04:06.238 11:41:59 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:06.238 11:41:59 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:06.238 11:41:59 -- setup/hugepages.sh@92 -- # local surp 00:04:06.238 11:41:59 -- setup/hugepages.sh@93 -- # local resv 00:04:06.238 11:41:59 -- setup/hugepages.sh@94 -- # local anon 00:04:06.238 11:41:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:06.238 11:41:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:06.238 11:41:59 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:06.238 11:41:59 -- setup/common.sh@18 -- # local node= 00:04:06.238 11:41:59 -- setup/common.sh@19 -- # local var val 00:04:06.238 11:41:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.238 11:41:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.238 11:41:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.238 11:41:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.238 11:41:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.238 11:41:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.238 11:41:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109434744 kB' 'MemAvailable: 112775844 kB' 'Buffers: 4132 kB' 'Cached: 10212908 kB' 'SwapCached: 0 kB' 'Active: 7319856 kB' 'Inactive: 3525960 kB' 'Active(anon): 6829260 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632124 kB' 'Mapped: 204532 kB' 'Shmem: 6200484 kB' 'KReclaimable: 297748 kB' 'Slab: 1143280 kB' 'SReclaimable: 297748 kB' 'SUnreclaim: 845532 kB' 'KernelStack: 27488 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8415500 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235884 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB' 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.238 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.238 
11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.238 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.239 11:41:59 -- 
setup/common.sh@33 -- # echo 0 00:04:06.239 11:41:59 -- setup/common.sh@33 -- # return 0 00:04:06.239 11:41:59 -- setup/hugepages.sh@97 -- # anon=0 00:04:06.239 11:41:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:06.239 11:41:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.239 11:41:59 -- setup/common.sh@18 -- # local node= 00:04:06.239 11:41:59 -- setup/common.sh@19 -- # local var val 00:04:06.239 11:41:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.239 11:41:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.239 11:41:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.239 11:41:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.239 11:41:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.239 11:41:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109436800 kB' 'MemAvailable: 112777900 kB' 'Buffers: 4132 kB' 'Cached: 10212912 kB' 'SwapCached: 0 kB' 'Active: 7319844 kB' 'Inactive: 3525960 kB' 'Active(anon): 6829248 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632132 kB' 'Mapped: 204480 kB' 'Shmem: 6200488 kB' 'KReclaimable: 297748 kB' 'Slab: 1143244 kB' 'SReclaimable: 297748 kB' 'SUnreclaim: 845496 kB' 'KernelStack: 27536 kB' 'PageTables: 8960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8416400 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235868 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB' 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.239 
11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 
11:41:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.239 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': 
' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 11:41:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 11:41:59 -- setup/common.sh@33 -- # echo 0 00:04:06.240 11:41:59 -- setup/common.sh@33 -- # return 0 00:04:06.240 11:41:59 -- setup/hugepages.sh@99 -- # surp=0 00:04:06.240 11:41:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:06.240 11:41:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:06.240 11:41:59 -- setup/common.sh@18 -- # local node= 00:04:06.240 11:41:59 -- setup/common.sh@19 -- # local var val 00:04:06.240 11:41:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.240 11:41:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.240 11:41:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.240 11:41:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.240 11:41:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.240 11:41:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.240 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 11:41:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109437452 kB' 'MemAvailable: 112778552 kB' 'Buffers: 4132 kB' 'Cached: 10212916 kB' 'SwapCached: 0 kB' 'Active: 7319528 kB' 'Inactive: 3525960 kB' 'Active(anon): 6828932 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 631808 kB' 'Mapped: 204540 kB' 'Shmem: 6200492 kB' 'KReclaimable: 297748 kB' 'Slab: 1143264 kB' 'SReclaimable: 297748 kB' 'SUnreclaim: 845516 kB' 'KernelStack: 27488 kB' 'PageTables: 8824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8415664 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235836 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB' 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 
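[annotation] The passes above resolve anon (AnonHugePages) and surp (HugePages_Surp) to 0, and the pass now in progress does the same for resv: each get_meminfo call re-reads the whole /proc/meminfo snapshot (printed once per call) and walks it field by field until the requested key matches, so the value it eventually echoes comes straight from that snapshot ('AnonHugePages: 0 kB', 'HugePages_Surp: 0', 'HugePages_Rsvd: 0'). A minimal sketch of an equivalent one-shot lookup — a hypothetical helper for illustration only, not the setup/common.sh implementation being traced — would be:

    # Hypothetical one-shot lookup: print the value of a single /proc/meminfo field.
    # get_meminfo in setup/common.sh reaches the same result with the read loop traced above.
    meminfo_value() {
        local key=$1 file=${2:-/proc/meminfo}
        awk -v k="$key" -F': +' '$1 == k { sub(/ kB$/, "", $2); print $2; exit }' "$file"
    }
    # e.g. meminfo_value HugePages_Rsvd   -> 0 on this run

The loop form in the traced script exists because the same code also serves the per-node meminfo files, where each line carries a "Node N " prefix that is stripped (the mem=("${mem[@]#Node +([0-9]) }") step above) before matching.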
00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- 
setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.241 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.242 11:41:59 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.242 11:41:59 -- setup/common.sh@33 -- # echo 0 00:04:06.242 11:41:59 -- setup/common.sh@33 -- # return 0 00:04:06.242 11:41:59 -- setup/hugepages.sh@100 -- # resv=0 00:04:06.242 11:41:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:06.242 nr_hugepages=1024 00:04:06.242 11:41:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:06.242 resv_hugepages=0 00:04:06.242 11:41:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:06.242 surplus_hugepages=0 00:04:06.242 11:41:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:06.242 anon_hugepages=0 00:04:06.242 11:41:59 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.242 11:41:59 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:06.242 11:41:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:06.242 11:41:59 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:06.242 11:41:59 -- setup/common.sh@18 -- # local node= 00:04:06.242 11:41:59 -- setup/common.sh@19 -- # local var val 00:04:06.242 11:41:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.242 11:41:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.242 11:41:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.242 11:41:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.242 11:41:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.242 11:41:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.242 11:41:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109437584 kB' 'MemAvailable: 112778684 kB' 'Buffers: 4132 kB' 'Cached: 10212920 kB' 'SwapCached: 0 kB' 'Active: 7319372 kB' 'Inactive: 3525960 kB' 'Active(anon): 6828776 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 631592 kB' 'Mapped: 204480 kB' 'Shmem: 6200496 kB' 'KReclaimable: 297748 kB' 'Slab: 1143264 
kB' 'SReclaimable: 297748 kB' 'SUnreclaim: 845516 kB' 'KernelStack: 27424 kB' 'PageTables: 8616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8415676 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235852 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB' 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.242 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.242 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 
11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.243 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.243 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 
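[annotation] This fourth pass stops at HugePages_Total and echoes 1024 (visible at the start of the next chunk), after which verify_nr_hugepages asserts that the pool it configured is exactly the pool the kernel reports: HugePages_Total == nr_hugepages + surp + resv, i.e. 1024 == 1024 + 0 + 0 for this run. Restated as a standalone check (a sketch using this run's values; the variable names are assumptions, the test's own form is the (( 1024 == nr_hugepages + surp + resv )) expression in the trace):

    # Sketch of the global assertion behind verify_nr_hugepages (values from this run).
    nr_hugepages=1024 surp=0 resv=0
    total=$(awk '$1 == "HugePages_Total:" { print $2 }' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage pool: $total"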
00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.244 11:41:59 -- setup/common.sh@33 -- # echo 1024 00:04:06.244 11:41:59 -- setup/common.sh@33 -- # return 0 00:04:06.244 11:41:59 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.244 11:41:59 -- setup/hugepages.sh@112 -- # get_nodes 00:04:06.244 11:41:59 -- setup/hugepages.sh@27 -- # local node 00:04:06.244 11:41:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.244 11:41:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:06.244 11:41:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.244 11:41:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:06.244 11:41:59 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:06.244 11:41:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.244 11:41:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.244 11:41:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.244 11:41:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:06.244 11:41:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.244 11:41:59 -- setup/common.sh@18 -- # local node=0 00:04:06.244 11:41:59 -- setup/common.sh@19 -- # local var val 00:04:06.244 11:41:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.244 11:41:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.244 11:41:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.244 11:41:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.244 11:41:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.244 11:41:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53868764 kB' 'MemUsed: 11790244 kB' 'SwapCached: 0 kB' 'Active: 5141272 kB' 'Inactive: 3325564 kB' 'Active(anon): 4805816 kB' 'Inactive(anon): 0 kB' 'Active(file): 335456 kB' 'Inactive(file): 3325564 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8246356 kB' 'Mapped: 117260 kB' 'AnonPages: 223676 kB' 'Shmem: 4585336 kB' 'KernelStack: 14456 kB' 'PageTables: 5392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 169508 kB' 'Slab: 639764 kB' 'SReclaimable: 169508 kB' 'SUnreclaim: 470256 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 
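The field-by-field scan above is setup/common.sh's get_meminfo helper at work: it captures /proc/meminfo, or /sys/devices/system/node/nodeN/meminfo when a node is given, then reads it with IFS=': ' and echoes the value of the first key that matches the requested name (HugePages_Total yields 1024 here, then HugePages_Surp is looked up for node 0). A condensed, self-contained sketch of that lookup, assuming the same file layout; get_meminfo_sketch is an illustrative name, not the repository's exact code:

    #!/usr/bin/env bash
    # Hedged sketch of the meminfo lookup traced above (not the repository's exact helper).
    shopt -s extglob

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node <n> " prefix; strip it as the trace does.
        mem=("${mem[@]#Node +([0-9]) }")
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching keys are skipped, as in the trace
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo_sketch HugePages_Total       # system-wide count, e.g. 1024 in the run above
    get_meminfo_sketch HugePages_Surp 0      # per-node surplus, e.g. 0 in the run above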
00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.244 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.244 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@33 -- # echo 0 00:04:06.245 11:41:59 -- setup/common.sh@33 -- # return 0 00:04:06.245 11:41:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.245 11:41:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.245 11:41:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.245 11:41:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:06.245 11:41:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.245 11:41:59 -- setup/common.sh@18 -- # local node=1 00:04:06.245 11:41:59 -- setup/common.sh@19 -- # local var val 00:04:06.245 11:41:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.245 11:41:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.245 11:41:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:06.245 11:41:59 -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node1/meminfo 00:04:06.245 11:41:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.245 11:41:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679856 kB' 'MemFree: 55568480 kB' 'MemUsed: 5111376 kB' 'SwapCached: 0 kB' 'Active: 2178184 kB' 'Inactive: 200396 kB' 'Active(anon): 2023044 kB' 'Inactive(anon): 0 kB' 'Active(file): 155140 kB' 'Inactive(file): 200396 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1970740 kB' 'Mapped: 87220 kB' 'AnonPages: 407996 kB' 'Shmem: 1615204 kB' 'KernelStack: 13032 kB' 'PageTables: 3428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128240 kB' 'Slab: 503500 kB' 'SReclaimable: 128240 kB' 'SUnreclaim: 375260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- 
setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.245 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.245 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.246 11:41:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.246 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.246 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.246 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.246 11:41:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.246 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.246 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.246 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.246 11:41:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.246 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.246 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.246 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.246 11:41:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.246 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.246 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.246 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.246 11:41:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.246 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.246 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.246 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.246 11:41:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.246 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.246 11:41:59 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.246 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.246 11:41:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.246 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.246 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.246 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.246 11:41:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.246 11:41:59 -- setup/common.sh@32 -- # continue 00:04:06.246 11:41:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.246 11:41:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.246 11:41:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.246 11:41:59 -- setup/common.sh@33 -- # echo 0 00:04:06.246 11:41:59 -- setup/common.sh@33 -- # return 0 00:04:06.246 11:41:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.246 11:41:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.246 11:41:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.246 11:41:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.246 11:41:59 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:06.246 node0=512 expecting 512 00:04:06.246 11:41:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.246 11:41:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.246 11:41:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.246 11:41:59 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:06.246 node1=512 expecting 512 00:04:06.246 11:41:59 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:06.246 00:04:06.246 real 0m3.612s 00:04:06.246 user 0m1.422s 00:04:06.246 sys 0m2.251s 00:04:06.246 11:41:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.246 11:41:59 -- common/autotest_common.sh@10 -- # set +x 00:04:06.246 ************************************ 00:04:06.246 END TEST even_2G_alloc 00:04:06.246 ************************************ 00:04:06.246 11:41:59 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:06.246 11:41:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:06.246 11:41:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:06.246 11:41:59 -- common/autotest_common.sh@10 -- # set +x 00:04:06.246 ************************************ 00:04:06.246 START TEST odd_alloc 00:04:06.246 ************************************ 00:04:06.246 11:41:59 -- common/autotest_common.sh@1104 -- # odd_alloc 00:04:06.246 11:41:59 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:06.246 11:41:59 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:06.246 11:41:59 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:06.246 11:41:59 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:06.246 11:41:59 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:06.246 11:41:59 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:06.246 11:41:59 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:06.246 11:41:59 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:06.246 11:41:59 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:06.246 11:41:59 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:06.246 11:41:59 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:06.246 11:41:59 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:06.246 11:41:59 
-- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:06.246 11:41:59 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:06.246 11:41:59 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:06.246 11:41:59 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:06.246 11:41:59 -- setup/hugepages.sh@83 -- # : 513 00:04:06.246 11:41:59 -- setup/hugepages.sh@84 -- # : 1 00:04:06.246 11:41:59 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:06.246 11:41:59 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:06.246 11:41:59 -- setup/hugepages.sh@83 -- # : 0 00:04:06.246 11:41:59 -- setup/hugepages.sh@84 -- # : 0 00:04:06.246 11:41:59 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:06.246 11:41:59 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:06.246 11:41:59 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:06.246 11:41:59 -- setup/hugepages.sh@160 -- # setup output 00:04:06.246 11:41:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.246 11:41:59 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:09.546 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:09.546 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:09.546 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:09.546 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:09.546 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:09.818 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:09.818 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:09.818 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:09.818 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:09.818 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:09.818 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:09.818 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:09.818 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:09.818 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:09.818 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:09.818 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:09.818 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:09.818 11:42:03 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:09.818 11:42:03 -- setup/hugepages.sh@89 -- # local node 00:04:09.818 11:42:03 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:09.818 11:42:03 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:09.818 11:42:03 -- setup/hugepages.sh@92 -- # local surp 00:04:09.818 11:42:03 -- setup/hugepages.sh@93 -- # local resv 00:04:09.818 11:42:03 -- setup/hugepages.sh@94 -- # local anon 00:04:09.818 11:42:03 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:09.818 11:42:03 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:09.818 11:42:03 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:09.818 11:42:03 -- setup/common.sh@18 -- # local node= 00:04:09.818 11:42:03 -- setup/common.sh@19 -- # local var val 00:04:09.818 11:42:03 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.818 11:42:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.818 11:42:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.818 11:42:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.818 11:42:03 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.818 
11:42:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 11:42:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109362428 kB' 'MemAvailable: 112703528 kB' 'Buffers: 4132 kB' 'Cached: 10213060 kB' 'SwapCached: 0 kB' 'Active: 7329768 kB' 'Inactive: 3525960 kB' 'Active(anon): 6839172 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641908 kB' 'Mapped: 205460 kB' 'Shmem: 6200636 kB' 'KReclaimable: 297748 kB' 'Slab: 1142384 kB' 'SReclaimable: 297748 kB' 'SUnreclaim: 844636 kB' 'KernelStack: 27808 kB' 'PageTables: 9856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 8428892 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236096 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB' 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 
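The meminfo snapshot being scanned here reflects the odd_alloc setup: HUGEMEM=2049 (MB) requests 2098176 kB, which with a 2048 kB hugepage size rounds up to an odd 1025 pages (Hugetlb: 2099200 kB), split 513/512 across the two NUMA nodes by the per-node pass shown earlier. A small sketch of that arithmetic, following the rounding and split implied by the trace rather than quoting hugepages.sh verbatim:

    #!/usr/bin/env bash
    # Hedged sketch of the sizing implied by the trace: 2098176 kB requested,
    # 2048 kB hugepages, hence an odd 1025 pages spread over two nodes.
    size_kb=2098176            # HUGEMEM=2049 MB expressed in kB
    hugepagesize_kb=2048
    nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
    echo "nr_hugepages=$nr_hugepages"                              # 1025
    echo "Hugetlb=$(( nr_hugepages * hugepagesize_kb )) kB"        # 2099200 kB

    # Walk the nodes from last to first, giving each an even share of what is
    # still unassigned, so node0 ends up holding the extra page of an odd count.
    no_nodes=2
    left=$nr_hugepages
    declare -a nodes_test
    for (( node = no_nodes - 1; node >= 0; node-- )); do
        nodes_test[node]=$(( left / (node + 1) ))
        left=$(( left - nodes_test[node] ))
    done
    printf 'node0=%d node1=%d\n' "${nodes_test[0]}" "${nodes_test[1]}"   # 513 512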
00:04:09.818 11:42:03 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.818 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.818 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 
00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.819 11:42:03 -- setup/common.sh@33 -- # echo 0 00:04:09.819 11:42:03 -- setup/common.sh@33 -- # return 0 00:04:09.819 11:42:03 -- setup/hugepages.sh@97 -- # anon=0 00:04:09.819 11:42:03 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:09.819 11:42:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.819 11:42:03 -- setup/common.sh@18 -- # local node= 00:04:09.819 11:42:03 -- setup/common.sh@19 -- # local var val 00:04:09.819 11:42:03 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.819 11:42:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.819 11:42:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.819 11:42:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.819 11:42:03 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.819 11:42:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109362960 kB' 'MemAvailable: 112704060 kB' 'Buffers: 4132 kB' 'Cached: 10213068 kB' 'SwapCached: 0 kB' 'Active: 7330300 kB' 'Inactive: 3525960 kB' 'Active(anon): 6839704 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641912 kB' 'Mapped: 205536 kB' 'Shmem: 6200644 kB' 'KReclaimable: 297748 kB' 'Slab: 1142456 kB' 'SReclaimable: 297748 kB' 'SUnreclaim: 844708 kB' 'KernelStack: 27648 kB' 'PageTables: 9484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 8430684 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236016 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB' 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
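This pass of verify_nr_hugepages is gathering the system-wide hugepage counters one at a time: AnonHugePages came back 0 (anon=0 above), HugePages_Surp is being read here, and HugePages_Rsvd follows. A condensed sketch of how those values feed the consistency check seen earlier in the log ((( total == nr_hugepages + surp + resv ))); it reuses the illustrative get_meminfo_sketch helper from above, and the exact conditions in hugepages.sh may differ:

    #!/usr/bin/env bash
    # Hedged sketch of the verification step: read the hugepage counters and
    # compare the kernel's total against the requested count plus surplus/reserved.
    # Relies on the get_meminfo_sketch helper sketched earlier in this log.
    nr_hugepages=1025                                   # odd_alloc request
    anon=$(get_meminfo_sketch AnonHugePages)            # 0 in this run; its later use is not shown here
    surp=$(get_meminfo_sketch HugePages_Surp)           # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)           # read next in the trace
    total=$(get_meminfo_sketch HugePages_Total)         # 1025 in this run
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting matches the 1025-page request"
    else
        echo "unexpected counters: total=$total surp=$surp resv=$resv" >&2
    fi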
00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 11:42:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 
11:42:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 11:42:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.820 11:42:03 -- setup/common.sh@32 -- # continue 00:04:09.820 11:42:03 -- 
setup/common.sh@31 -- # IFS=': '
00:04:09.820 11:42:03 -- setup/common.sh@31 -- # read -r var val _
00:04:09.820 11:42:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.820 11:42:03 -- setup/common.sh@32 -- # continue
(identical [[ field == HugePages_Surp ]] / continue trace repeats for each remaining /proc/meminfo field)
00:04:09.820 11:42:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.821 11:42:03 -- setup/common.sh@33 -- # echo 0
00:04:09.821 11:42:03 -- setup/common.sh@33 -- # return 0
00:04:09.821 11:42:03 -- setup/hugepages.sh@99 -- # surp=0
00:04:09.821 11:42:03 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:09.821 11:42:03 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:09.821 11:42:03 -- setup/common.sh@18 -- # local node=
00:04:09.821 11:42:03 -- setup/common.sh@19 -- # local var val
00:04:09.821 11:42:03 -- setup/common.sh@20 -- # local mem_f mem
00:04:09.821 11:42:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.821 11:42:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.821 11:42:03 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.821 11:42:03 -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.821 11:42:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.821 11:42:03 -- setup/common.sh@31 -- # IFS=': '
00:04:09.821 11:42:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109366112 kB' 'MemAvailable: 112707212 kB' 'Buffers: 4132 kB' 'Cached: 10213084 kB' 'SwapCached: 0 kB' 'Active: 7328556 kB' 'Inactive: 3525960 kB' 'Active(anon): 6837960 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640624 kB' 'Mapped: 205408 kB' 'Shmem: 6200660 kB' 'KReclaimable: 297748 kB' 'Slab: 1142460 kB' 'SReclaimable: 297748 kB' 'SUnreclaim: 844712 kB' 'KernelStack: 27584 kB' 'PageTables: 9288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 8429060 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235936 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB'
00:04:09.821 11:42:03 -- setup/common.sh@31 -- # read -r var val _
00:04:09.821 11:42:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.821 11:42:03 -- setup/common.sh@32 -- # continue
(identical [[ field == HugePages_Rsvd ]] / continue trace repeats for every non-matching /proc/meminfo field)
00:04:09.822 11:42:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.822 11:42:03 -- setup/common.sh@33 -- # echo 0
00:04:09.822 11:42:03 -- setup/common.sh@33 -- # return 0
00:04:09.822 11:42:03 -- setup/hugepages.sh@100 -- # resv=0
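The trace above is setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time until it reaches the requested key (HugePages_Rsvd here, HugePages_Surp just before). A minimal sketch of that kind of lookup, written independently of the SPDK sources and with an illustrative function name, would be:

# Minimal sketch (illustrative, not the SPDK helper): look up one field in
# /proc/meminfo or, when a node number is given, in that node's meminfo file.
get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#Node "$node" }            # per-node files prefix every line with "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}
# e.g. get_meminfo_value HugePages_Rsvd    -> 0   (system-wide, as in the trace above)
#      get_meminfo_value HugePages_Surp 0  -> 0   (NUMA node 0)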
00:04:09.822 11:42:03 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:09.822 nr_hugepages=1025
00:04:09.822 11:42:03 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:09.822 resv_hugepages=0
00:04:09.822 11:42:03 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:09.822 surplus_hugepages=0
00:04:09.822 11:42:03 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:09.822 anon_hugepages=0
00:04:09.822 11:42:03 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:09.822 11:42:03 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:09.822 11:42:03 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:09.822 11:42:03 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:09.822 11:42:03 -- setup/common.sh@18 -- # local node=
00:04:09.822 11:42:03 -- setup/common.sh@19 -- # local var val
00:04:09.822 11:42:03 -- setup/common.sh@20 -- # local mem_f mem
00:04:09.822 11:42:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.822 11:42:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.822 11:42:03 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.822 11:42:03 -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.822 11:42:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.822 11:42:03 -- setup/common.sh@31 -- # IFS=': '
00:04:09.822 11:42:03 -- setup/common.sh@31 -- # read -r var val _
00:04:09.822 11:42:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109373568 kB' 'MemAvailable: 112714668 kB' 'Buffers: 4132 kB' 'Cached: 10213096 kB' 'SwapCached: 0 kB' 'Active: 7329632 kB' 'Inactive: 3525960 kB' 'Active(anon): 6839036 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641656 kB' 'Mapped: 205416 kB' 'Shmem: 6200672 kB' 'KReclaimable: 297748 kB' 'Slab: 1142456 kB' 'SReclaimable: 297748 kB' 'SUnreclaim: 844708 kB' 'KernelStack: 27696 kB' 'PageTables: 9364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 8431088 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236032 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB'
00:04:09.822 11:42:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.822 11:42:03 -- setup/common.sh@32 -- # continue
(identical [[ field == HugePages_Total ]] / continue trace repeats for every non-matching /proc/meminfo field)
00:04:09.824 11:42:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.824 11:42:03 -- setup/common.sh@33 -- # echo 1025
00:04:09.824 11:42:03 -- setup/common.sh@33 -- # return 0
00:04:09.824 11:42:03 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:09.824 11:42:03 -- setup/hugepages.sh@112 -- # get_nodes
00:04:09.824 11:42:03 -- setup/hugepages.sh@27 -- # local node
00:04:09.824 11:42:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:09.824 11:42:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:09.824 11:42:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:09.824 11:42:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:09.824 11:42:03 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:09.824 11:42:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
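get_nodes, traced above, records how many 2048 kB hugepages the kernel currently reports on each NUMA node (512 on node0 and 513 on node1 here). A standalone sketch of that inventory against the standard kernel sysfs layout (variable names illustrative, not the SPDK code itself):

# Read the current per-node 2 MiB hugepage counts from sysfs; this mirrors what the
# traced get_nodes gathers into nodes_sys.
declare -A nodes_sys
for node in /sys/devices/system/node/node[0-9]*; do
    n=${node##*node}
    nodes_sys[$n]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "nodes: ${!nodes_sys[*]} -> counts: ${nodes_sys[*]}"   # e.g. "0 1 -> 512 513"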
00:04:09.824 11:42:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:09.824 11:42:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:10.087 11:42:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:10.087 11:42:03 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:10.087 11:42:03 -- setup/common.sh@18 -- # local node=0
00:04:10.087 11:42:03 -- setup/common.sh@19 -- # local var val
00:04:10.087 11:42:03 -- setup/common.sh@20 -- # local mem_f mem
00:04:10.087 11:42:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.087 11:42:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:10.087 11:42:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:10.087 11:42:03 -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.087 11:42:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.087 11:42:03 -- setup/common.sh@31 -- # IFS=': '
00:04:10.087 11:42:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53867320 kB' 'MemUsed: 11791688 kB' 'SwapCached: 0 kB' 'Active: 5143924 kB' 'Inactive: 3325564 kB' 'Active(anon): 4808468 kB' 'Inactive(anon): 0 kB' 'Active(file): 335456 kB' 'Inactive(file): 3325564 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8246420 kB' 'Mapped: 117260 kB' 'AnonPages: 226764 kB' 'Shmem: 4585400 kB' 'KernelStack: 14456 kB' 'PageTables: 5288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 169508 kB' 'Slab: 639124 kB' 'SReclaimable: 169508 kB' 'SUnreclaim: 469616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:10.087 11:42:03 -- setup/common.sh@31 -- # read -r var val _
00:04:10.087 11:42:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.087 11:42:03 -- setup/common.sh@32 -- # continue
(identical [[ field == HugePages_Surp ]] / continue trace repeats for every non-matching node0 meminfo field)
00:04:10.088 11:42:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.088 11:42:03 -- setup/common.sh@33 -- # echo 0
00:04:10.088 11:42:03 -- setup/common.sh@33 -- # return 0
00:04:10.088 11:42:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:10.088 11:42:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:10.088 11:42:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:10.088 11:42:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:10.088 11:42:03 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:10.088 11:42:03 -- setup/common.sh@18 -- # local node=1
00:04:10.088 11:42:03 -- setup/common.sh@19 -- # local var val
00:04:10.088 11:42:03 -- setup/common.sh@20 -- # local mem_f mem
00:04:10.088 11:42:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.088 11:42:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:10.088 11:42:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:10.088 11:42:03 -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.088 11:42:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.088 11:42:03 -- setup/common.sh@31 -- # IFS=': '
00:04:10.088 11:42:03 -- setup/common.sh@31 -- # read -r var val _
00:04:10.088 11:42:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679856 kB' 'MemFree: 55504572 kB' 'MemUsed: 5175284 kB' 'SwapCached: 0 kB' 'Active: 2186908 kB' 'Inactive: 200396 kB' 'Active(anon): 2031768 kB' 'Inactive(anon): 0 kB' 'Active(file): 155140 kB' 'Inactive(file): 200396 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1970824 kB' 'Mapped: 88148 kB' 'AnonPages: 416664 kB' 'Shmem: 1615288 kB' 'KernelStack: 13096 kB' 'PageTables: 3752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128240 kB' 'Slab: 503204 kB' 'SReclaimable: 128240 kB' 'SUnreclaim: 374964 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:04:10.088 11:42:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.088 11:42:03 -- setup/common.sh@32 -- # continue
(identical [[ field == HugePages_Surp ]] / continue trace repeats for every non-matching node1 meminfo field)
00:04:10.089 11:42:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.089 11:42:03 -- setup/common.sh@33 -- # echo 0
00:04:10.089 11:42:03 -- setup/common.sh@33 -- # return 0
00:04:10.089 11:42:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:10.089 11:42:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:10.089 11:42:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:10.089 11:42:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:10.089 11:42:03 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:04:10.089 node0=512 expecting 513
00:04:10.089 11:42:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:10.089 11:42:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:10.089 11:42:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:10.089 11:42:03 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:04:10.089 node1=513 expecting 512
00:04:10.089 11:42:03 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
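The pass criterion traced above only checks that the set of per-node counts read back matches the set that was requested; it does not care which node ended up with the odd extra page, which is why "node0=512 expecting 513" still verifies. A small sketch of that comparison, with the array contents hard-coded for illustration:

# odd_alloc-style check (illustrative values): the sorted set of actual per-node
# hugepage counts must equal the sorted set of requested counts.
declare -A nodes_test=([0]=513 [1]=512)   # requested per node
declare -A nodes_sys=([0]=512 [1]=513)    # reported by the kernel per node
declare -a sorted_t=() sorted_s=()
for node in "${!nodes_test[@]}"; do
    sorted_t[${nodes_test[node]}]=1       # indexed arrays keep their keys in numeric order,
    sorted_s[${nodes_sys[node]}]=1        # so ${!arr[*]} yields the counts sorted
done
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "per-node distribution OK"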
00:04:10.089 11:42:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.089 11:42:03 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:10.089 node1=513 expecting 512 00:04:10.089 11:42:03 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:10.089 00:04:10.089 real 0m3.691s 00:04:10.089 user 0m1.470s 00:04:10.089 sys 0m2.283s 00:04:10.089 11:42:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.089 11:42:03 -- common/autotest_common.sh@10 -- # set +x 00:04:10.090 ************************************ 00:04:10.090 END TEST odd_alloc 00:04:10.090 ************************************ 00:04:10.090 11:42:03 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:10.090 11:42:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:10.090 11:42:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:10.090 11:42:03 -- common/autotest_common.sh@10 -- # set +x 00:04:10.090 ************************************ 00:04:10.090 START TEST custom_alloc 00:04:10.090 ************************************ 00:04:10.090 11:42:03 -- common/autotest_common.sh@1104 -- # custom_alloc 00:04:10.090 11:42:03 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:10.090 11:42:03 -- setup/hugepages.sh@169 -- # local node 00:04:10.090 11:42:03 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:10.090 11:42:03 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:10.090 11:42:03 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:10.090 11:42:03 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:10.090 11:42:03 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:10.090 11:42:03 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:10.090 11:42:03 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:10.090 11:42:03 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:10.090 11:42:03 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:10.090 11:42:03 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:10.090 11:42:03 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:10.090 11:42:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:10.090 11:42:03 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:10.090 11:42:03 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:10.090 11:42:03 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:10.090 11:42:03 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:10.090 11:42:03 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:10.090 11:42:03 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.090 11:42:03 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:10.090 11:42:03 -- setup/hugepages.sh@83 -- # : 256 00:04:10.090 11:42:03 -- setup/hugepages.sh@84 -- # : 1 00:04:10.090 11:42:03 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.090 11:42:03 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:10.090 11:42:03 -- setup/hugepages.sh@83 -- # : 0 00:04:10.090 11:42:03 -- setup/hugepages.sh@84 -- # : 0 00:04:10.090 11:42:03 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.090 11:42:03 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:10.090 11:42:03 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:10.090 11:42:03 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:10.090 11:42:03 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:10.090 11:42:03 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:10.090 11:42:03 -- setup/hugepages.sh@55 -- # (( size >= 
default_hugepages )) 00:04:10.090 11:42:03 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:10.090 11:42:03 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:10.090 11:42:03 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:10.090 11:42:03 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:10.090 11:42:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:10.090 11:42:03 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:10.090 11:42:03 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:10.090 11:42:03 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:10.090 11:42:03 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:10.090 11:42:03 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:10.090 11:42:03 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:10.090 11:42:03 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:10.090 11:42:03 -- setup/hugepages.sh@78 -- # return 0 00:04:10.090 11:42:03 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:10.090 11:42:03 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:10.090 11:42:03 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:10.090 11:42:03 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:10.090 11:42:03 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:10.090 11:42:03 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:10.090 11:42:03 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:10.090 11:42:03 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:10.090 11:42:03 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:10.090 11:42:03 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:10.090 11:42:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:10.090 11:42:03 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:10.090 11:42:03 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:10.090 11:42:03 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:10.090 11:42:03 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:10.090 11:42:03 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:10.090 11:42:03 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:10.090 11:42:03 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:10.090 11:42:03 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:10.090 11:42:03 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:10.090 11:42:03 -- setup/hugepages.sh@78 -- # return 0 00:04:10.090 11:42:03 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:10.090 11:42:03 -- setup/hugepages.sh@187 -- # setup output 00:04:10.090 11:42:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.090 11:42:03 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:13.391 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:13.391 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:13.391 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:13.391 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:13.391 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:13.391 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:13.391 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:13.391 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:13.391 0000:00:01.6 
(8086 0b00): Already using the vfio-pci driver 00:04:13.391 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:13.391 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:13.391 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:13.391 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:13.391 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:13.391 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:13.391 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:13.391 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:13.656 11:42:07 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:13.656 11:42:07 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:13.656 11:42:07 -- setup/hugepages.sh@89 -- # local node 00:04:13.656 11:42:07 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:13.656 11:42:07 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:13.656 11:42:07 -- setup/hugepages.sh@92 -- # local surp 00:04:13.656 11:42:07 -- setup/hugepages.sh@93 -- # local resv 00:04:13.656 11:42:07 -- setup/hugepages.sh@94 -- # local anon 00:04:13.656 11:42:07 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:13.656 11:42:07 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:13.656 11:42:07 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:13.656 11:42:07 -- setup/common.sh@18 -- # local node= 00:04:13.656 11:42:07 -- setup/common.sh@19 -- # local var val 00:04:13.656 11:42:07 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.656 11:42:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.656 11:42:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.656 11:42:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.656 11:42:07 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.656 11:42:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.656 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.656 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 108351748 kB' 'MemAvailable: 111692848 kB' 'Buffers: 4132 kB' 'Cached: 10213216 kB' 'SwapCached: 0 kB' 'Active: 7330808 kB' 'Inactive: 3525960 kB' 'Active(anon): 6840212 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642248 kB' 'Mapped: 205544 kB' 'Shmem: 6200792 kB' 'KReclaimable: 297748 kB' 'Slab: 1141964 kB' 'SReclaimable: 297748 kB' 'SUnreclaim: 844216 kB' 'KernelStack: 27712 kB' 'PageTables: 9396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 8431996 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236032 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB' 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 
-- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- 
# [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 
00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.657 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.657 11:42:07 -- setup/common.sh@31 
-- # IFS=': ' 00:04:13.657 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.658 11:42:07 -- setup/common.sh@33 -- # echo 0 00:04:13.658 11:42:07 -- setup/common.sh@33 -- # return 0 00:04:13.658 11:42:07 -- setup/hugepages.sh@97 -- # anon=0 00:04:13.658 11:42:07 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:13.658 11:42:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.658 11:42:07 -- setup/common.sh@18 -- # local node= 00:04:13.658 11:42:07 -- setup/common.sh@19 -- # local var val 00:04:13.658 11:42:07 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.658 11:42:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.658 11:42:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.658 11:42:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.658 11:42:07 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.658 11:42:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 108350892 kB' 'MemAvailable: 111691992 kB' 'Buffers: 4132 kB' 'Cached: 10213220 kB' 'SwapCached: 0 kB' 'Active: 7331068 kB' 'Inactive: 3525960 kB' 'Active(anon): 6840472 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642532 kB' 'Mapped: 205544 kB' 'Shmem: 6200796 kB' 'KReclaimable: 297748 kB' 'Slab: 1141956 kB' 'SReclaimable: 297748 kB' 'SUnreclaim: 844208 kB' 'KernelStack: 27712 kB' 'PageTables: 9628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 8432008 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236112 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB' 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 
-- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.658 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.658 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 
00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.659 11:42:07 -- setup/common.sh@33 -- # echo 0 00:04:13.659 11:42:07 -- setup/common.sh@33 -- # return 0 00:04:13.659 11:42:07 -- setup/hugepages.sh@99 -- # surp=0 00:04:13.659 11:42:07 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:13.659 11:42:07 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:13.659 11:42:07 -- setup/common.sh@18 -- # local node= 00:04:13.659 11:42:07 -- setup/common.sh@19 -- # local var val 00:04:13.659 11:42:07 -- setup/common.sh@20 
-- # local mem_f mem 00:04:13.659 11:42:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.659 11:42:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.659 11:42:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.659 11:42:07 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.659 11:42:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 108351524 kB' 'MemAvailable: 111692624 kB' 'Buffers: 4132 kB' 'Cached: 10213232 kB' 'SwapCached: 0 kB' 'Active: 7329992 kB' 'Inactive: 3525960 kB' 'Active(anon): 6839396 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641876 kB' 'Mapped: 205464 kB' 'Shmem: 6200808 kB' 'KReclaimable: 297748 kB' 'Slab: 1141940 kB' 'SReclaimable: 297748 kB' 'SUnreclaim: 844192 kB' 'KernelStack: 27632 kB' 'PageTables: 9112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 8427092 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235936 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB' 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 
-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.659 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.659 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 
00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- 
setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 11:42:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.660 11:42:07 -- setup/common.sh@33 -- # echo 0 00:04:13.660 11:42:07 -- setup/common.sh@33 -- # return 0 00:04:13.660 11:42:07 -- setup/hugepages.sh@100 -- # resv=0 00:04:13.660 11:42:07 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:13.660 nr_hugepages=1536 00:04:13.660 11:42:07 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:13.660 resv_hugepages=0 00:04:13.660 11:42:07 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:13.660 surplus_hugepages=0 00:04:13.660 11:42:07 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:13.660 anon_hugepages=0 00:04:13.660 11:42:07 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:13.660 11:42:07 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:13.661 11:42:07 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:13.661 11:42:07 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:13.661 11:42:07 -- setup/common.sh@18 -- # local node= 00:04:13.661 11:42:07 -- setup/common.sh@19 -- # local var val 00:04:13.661 11:42:07 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.661 11:42:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.661 11:42:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.661 11:42:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.661 11:42:07 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.661 11:42:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.661 11:42:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 108352656 kB' 'MemAvailable: 111693756 kB' 'Buffers: 4132 kB' 'Cached: 10213248 kB' 'SwapCached: 0 kB' 'Active: 7329424 kB' 'Inactive: 3525960 kB' 'Active(anon): 
6838828 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641420 kB' 'Mapped: 205464 kB' 'Shmem: 6200824 kB' 'KReclaimable: 297748 kB' 'Slab: 1142076 kB' 'SReclaimable: 297748 kB' 'SUnreclaim: 844328 kB' 'KernelStack: 27600 kB' 'PageTables: 9176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 8427104 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235920 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB' 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
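For context on the bookkeeping around these meminfo dumps: custom_alloc requested two per-node reservations, and the trace shows the requested sizes 1048576 and 2097152 becoming 512 and 1024 pages (size/2048, matching the 2048 kB Hugepagesize reported in the dumps), hence HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'. verify_nr_hugepages then re-reads meminfo to confirm the kernel exposes exactly that many pages. A sketch of that final check, assuming the get_meminfo helper sketched earlier and using the values printed in this run:

    # values as printed in this run: nodes_hp[0]=512, nodes_hp[1]=1024
    nodes_hp=([0]=512 [1]=1024)
    nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        ((nr_hugepages += nodes_hp[node]))             # 512 + 1024 = 1536
    done
    surp=$(get_meminfo HugePages_Surp)                 # 0 in this log
    resv=$(get_meminfo HugePages_Rsvd)                 # 0 in this log
    total=$(get_meminfo HugePages_Total)               # 1536; Hugetlb = 1536 * 2048 kB = 3145728 kB
    ((total == nr_hugepages + surp + resv)) && echo "hugepage count verified"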
00:04:13.661 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.661 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.661 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- 
# continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.662 11:42:07 -- setup/common.sh@33 -- # echo 1536 00:04:13.662 11:42:07 -- setup/common.sh@33 -- # return 0 00:04:13.662 11:42:07 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:13.662 11:42:07 -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.662 11:42:07 -- setup/hugepages.sh@27 -- # local node 00:04:13.662 11:42:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.662 11:42:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:13.662 11:42:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.662 11:42:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:13.662 11:42:07 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:13.662 11:42:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.662 11:42:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.662 11:42:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.662 11:42:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:13.662 11:42:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.662 11:42:07 -- setup/common.sh@18 -- # local node=0 00:04:13.662 11:42:07 -- setup/common.sh@19 -- # local var val 00:04:13.662 11:42:07 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.662 11:42:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.662 11:42:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.662 11:42:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.662 11:42:07 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.662 11:42:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53879920 kB' 'MemUsed: 11779088 kB' 'SwapCached: 0 kB' 'Active: 5142712 kB' 'Inactive: 3325564 kB' 'Active(anon): 4807256 kB' 'Inactive(anon): 0 kB' 'Active(file): 335456 kB' 'Inactive(file): 3325564 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8246484 kB' 'Mapped: 117304 kB' 'AnonPages: 224992 kB' 'Shmem: 4585464 kB' 'KernelStack: 14504 kB' 'PageTables: 5440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 169508 kB' 'Slab: 638980 kB' 'SReclaimable: 169508 kB' 'SUnreclaim: 469472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.662 11:42:07 -- 
setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.662 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.662 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 
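The long runs of "[[ key == pattern ]] / continue / IFS=': ' / read" above and below are one helper scanning a meminfo file key by key until it finds the requested field. A rough bash reconstruction of that lookup, inferred only from the commands visible in this trace (not the literal setup/common.sh source), would be:

get_meminfo_sketch() {
  # Readability reconstruction only; the real helper is get_meminfo in
  # spdk's test/setup/common.sh and differs in detail.
  local get=$1 node=$2 mem_f=/proc/meminfo line var val
  [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
  while IFS= read -r line; do
    # per-node meminfo files prefix every line with "Node <N> "; strip it
    [[ $line == Node\ * ]] && line=${line#Node * }
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done <"$mem_f"
  return 1
}
# Examples of what the traced calls are effectively doing:
#   get_meminfo_sketch HugePages_Surp 0   -> surplus huge pages on NUMA node 0
#   get_meminfo_sketch AnonHugePages      -> system-wide AnonHugePages, in kB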
00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@33 -- # echo 0 00:04:13.663 11:42:07 -- setup/common.sh@33 -- # return 0 00:04:13.663 11:42:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.663 11:42:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.663 11:42:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.663 11:42:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:13.663 11:42:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.663 11:42:07 -- 
setup/common.sh@18 -- # local node=1 00:04:13.663 11:42:07 -- setup/common.sh@19 -- # local var val 00:04:13.663 11:42:07 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.663 11:42:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.663 11:42:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:13.663 11:42:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:13.663 11:42:07 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.663 11:42:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679856 kB' 'MemFree: 54472232 kB' 'MemUsed: 6207624 kB' 'SwapCached: 0 kB' 'Active: 2186752 kB' 'Inactive: 200396 kB' 'Active(anon): 2031612 kB' 'Inactive(anon): 0 kB' 'Active(file): 155140 kB' 'Inactive(file): 200396 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1970920 kB' 'Mapped: 88160 kB' 'AnonPages: 416440 kB' 'Shmem: 1615384 kB' 'KernelStack: 13096 kB' 'PageTables: 3736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128240 kB' 'Slab: 503096 kB' 'SReclaimable: 128240 kB' 'SUnreclaim: 374856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.663 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.663 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 
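These per-node lookups feed the comparison printed just after them ("node0=512 expecting 512", "node1=1024 expecting 1024"). A simplified sketch of that check follows; the expected per-node split is hard-coded here as an assumption, whereas the real values come from get_test_nr_hugepages_per_node in test/setup/hugepages.sh:

# Simplified per-node comparison (assumed expected split of 512/1024 pages)
declare -A expected=( [0]=512 [1]=1024 )
for id in "${!expected[@]}"; do
  meminfo=/sys/devices/system/node/node$id/meminfo
  total=$(awk '/HugePages_Total/ {print $NF}' "$meminfo")
  surp=$(awk '/HugePages_Surp/ {print $NF}' "$meminfo")
  # surplus pages are tolerated on top of what the test reserved
  echo "node$id=$total expecting $(( expected[$id] + surp ))"
done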
00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # continue 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.664 11:42:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.664 11:42:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.664 11:42:07 -- setup/common.sh@33 -- # echo 0 00:04:13.664 11:42:07 -- setup/common.sh@33 -- # return 0 00:04:13.664 11:42:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.664 11:42:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.664 11:42:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.664 11:42:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.664 11:42:07 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:13.664 node0=512 expecting 512 00:04:13.664 11:42:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.664 11:42:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.664 11:42:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.664 11:42:07 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:13.664 node1=1024 expecting 1024 00:04:13.664 11:42:07 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:13.664 00:04:13.664 real 0m3.706s 00:04:13.664 user 0m1.538s 00:04:13.664 sys 0m2.234s 00:04:13.664 11:42:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.664 11:42:07 -- common/autotest_common.sh@10 -- # set +x 00:04:13.664 ************************************ 00:04:13.664 END TEST custom_alloc 00:04:13.664 ************************************ 00:04:13.926 11:42:07 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:13.926 11:42:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:13.926 11:42:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:13.926 11:42:07 -- common/autotest_common.sh@10 -- # set +x 00:04:13.926 ************************************ 00:04:13.926 START TEST no_shrink_alloc 00:04:13.926 ************************************ 00:04:13.926 11:42:07 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:04:13.926 11:42:07 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:13.926 11:42:07 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:13.926 11:42:07 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:13.926 11:42:07 -- setup/hugepages.sh@51 -- # shift 00:04:13.926 11:42:07 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:13.926 11:42:07 -- setup/hugepages.sh@52 -- # local node_ids 00:04:13.926 11:42:07 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages 
)) 00:04:13.926 11:42:07 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:13.926 11:42:07 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:13.926 11:42:07 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:13.926 11:42:07 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:13.926 11:42:07 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:13.926 11:42:07 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:13.926 11:42:07 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:13.926 11:42:07 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:13.926 11:42:07 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:13.926 11:42:07 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:13.926 11:42:07 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:13.926 11:42:07 -- setup/hugepages.sh@73 -- # return 0 00:04:13.926 11:42:07 -- setup/hugepages.sh@198 -- # setup output 00:04:13.926 11:42:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.926 11:42:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:17.230 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:17.230 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:17.230 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:17.230 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:17.230 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:17.230 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:17.231 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:17.231 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:17.231 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:17.231 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:17.231 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:17.231 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:17.231 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:17.231 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:17.231 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:17.231 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:17.231 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:17.231 11:42:10 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:17.231 11:42:10 -- setup/hugepages.sh@89 -- # local node 00:04:17.231 11:42:10 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:17.231 11:42:10 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:17.231 11:42:10 -- setup/hugepages.sh@92 -- # local surp 00:04:17.231 11:42:10 -- setup/hugepages.sh@93 -- # local resv 00:04:17.231 11:42:10 -- setup/hugepages.sh@94 -- # local anon 00:04:17.231 11:42:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:17.231 11:42:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:17.231 11:42:10 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:17.231 11:42:10 -- setup/common.sh@18 -- # local node= 00:04:17.231 11:42:10 -- setup/common.sh@19 -- # local var val 00:04:17.231 11:42:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.231 11:42:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.231 11:42:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.231 11:42:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.231 11:42:10 -- setup/common.sh@28 -- # mapfile -t 
mem 00:04:17.231 11:42:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.231 11:42:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109391412 kB' 'MemAvailable: 112732500 kB' 'Buffers: 4132 kB' 'Cached: 10213364 kB' 'SwapCached: 0 kB' 'Active: 7324884 kB' 'Inactive: 3525960 kB' 'Active(anon): 6834288 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636136 kB' 'Mapped: 204604 kB' 'Shmem: 6200940 kB' 'KReclaimable: 297724 kB' 'Slab: 1141464 kB' 'SReclaimable: 297724 kB' 'SUnreclaim: 843740 kB' 'KernelStack: 27584 kB' 'PageTables: 9140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8420380 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235884 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB' 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 
00:04:17.231 11:42:10 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.231 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.231 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.232 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.232 11:42:10 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.232 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.232 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.232 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.232 11:42:10 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.232 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.232 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.232 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.232 11:42:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.232 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.232 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.232 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.232 11:42:10 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.232 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.232 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.232 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.232 11:42:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.232 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.232 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 
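The AnonHugePages pass traced here is the preamble of verify_nr_hugepages: it first checks whether transparent huge pages are disabled ("always [madvise] never" against *[never]*) and, if THP is still enabled, records how much anonymous THP memory is in use before counting hugetlb pages. A hedged approximation, using the standard kernel sysfs/procfs paths (the exact logic in test/setup/hugepages.sh differs):

# Approximation of the preamble; anon ends up 0 in this run, as the trace shows
thp_state=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp_state != *"[never]"* ]]; then
  # THP enabled: anonymous huge pages may exist, so account for them (value is in kB)
  anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
else
  anon=0
fi
echo "anon=$anon"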
00:04:17.232 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.232 11:42:10 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.232 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.232 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.232 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.232 11:42:10 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.232 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.232 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.232 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.232 11:42:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.232 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.232 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.232 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.232 11:42:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.232 11:42:10 -- setup/common.sh@32 -- # continue 00:04:17.232 11:42:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.232 11:42:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.232 11:42:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.232 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.232 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.232 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.232 11:42:11 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.232 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.232 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.232 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.232 11:42:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.232 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.232 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.232 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.232 11:42:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.495 11:42:11 -- setup/common.sh@33 -- # echo 0 00:04:17.495 11:42:11 -- setup/common.sh@33 -- # return 0 00:04:17.495 11:42:11 -- setup/hugepages.sh@97 -- # anon=0 00:04:17.495 11:42:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:17.495 11:42:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.495 11:42:11 -- setup/common.sh@18 -- # local node= 00:04:17.495 11:42:11 -- setup/common.sh@19 -- # local var val 00:04:17.495 11:42:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.495 11:42:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.495 11:42:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.495 11:42:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.495 11:42:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.495 11:42:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.495 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.495 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109391772 kB' 'MemAvailable: 112732860 kB' 'Buffers: 4132 kB' 'Cached: 10213368 kB' 'SwapCached: 0 kB' 'Active: 7324416 kB' 'Inactive: 3525960 kB' 'Active(anon): 6833820 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636324 kB' 'Mapped: 204528 kB' 'Shmem: 6200944 kB' 'KReclaimable: 297724 kB' 'Slab: 1141448 kB' 'SReclaimable: 297724 kB' 'SUnreclaim: 843724 kB' 'KernelStack: 27616 kB' 'PageTables: 9232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8423300 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235868 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB' 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 
11:42:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.496 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.496 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # [[ Unaccepted 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.497 11:42:11 -- setup/common.sh@33 -- # echo 0 00:04:17.497 11:42:11 -- setup/common.sh@33 -- # return 0 00:04:17.497 11:42:11 -- setup/hugepages.sh@99 -- # surp=0 00:04:17.497 11:42:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:17.497 11:42:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:17.497 11:42:11 -- setup/common.sh@18 -- # local node= 00:04:17.497 11:42:11 -- setup/common.sh@19 -- # local var val 00:04:17.497 11:42:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.497 11:42:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.497 11:42:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.497 11:42:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.497 11:42:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.497 11:42:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.497 11:42:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109392824 kB' 'MemAvailable: 112733912 kB' 'Buffers: 4132 kB' 'Cached: 10213380 kB' 'SwapCached: 0 kB' 'Active: 7323892 kB' 'Inactive: 3525960 kB' 'Active(anon): 6833296 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635780 kB' 'Mapped: 204528 kB' 'Shmem: 6200956 kB' 'KReclaimable: 297724 kB' 'Slab: 1141448 kB' 'SReclaimable: 297724 kB' 'SUnreclaim: 843724 kB' 'KernelStack: 27536 kB' 'PageTables: 8964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8420412 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235820 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB' 00:04:17.497 11:42:11 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.497 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.497 11:42:11 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- 
setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 
11:42:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.498 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.498 11:42:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.498 11:42:11 -- setup/common.sh@33 -- # echo 0 00:04:17.498 
11:42:11 -- setup/common.sh@33 -- # return 0 00:04:17.498 11:42:11 -- setup/hugepages.sh@100 -- # resv=0 00:04:17.498 11:42:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:17.498 nr_hugepages=1024 00:04:17.498 11:42:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:17.498 resv_hugepages=0 00:04:17.498 11:42:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:17.498 surplus_hugepages=0 00:04:17.498 11:42:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:17.498 anon_hugepages=0 00:04:17.499 11:42:11 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:17.499 11:42:11 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:17.499 11:42:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:17.499 11:42:11 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:17.499 11:42:11 -- setup/common.sh@18 -- # local node= 00:04:17.499 11:42:11 -- setup/common.sh@19 -- # local var val 00:04:17.499 11:42:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.499 11:42:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.499 11:42:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.499 11:42:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.499 11:42:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.499 11:42:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109391776 kB' 'MemAvailable: 112732860 kB' 'Buffers: 4132 kB' 'Cached: 10213392 kB' 'SwapCached: 0 kB' 'Active: 7323964 kB' 'Inactive: 3525960 kB' 'Active(anon): 6833368 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635760 kB' 'Mapped: 204528 kB' 'Shmem: 6200968 kB' 'KReclaimable: 297716 kB' 'Slab: 1141440 kB' 'SReclaimable: 297716 kB' 'SUnreclaim: 843724 kB' 'KernelStack: 27552 kB' 'PageTables: 9016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8420424 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235836 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB' 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
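At this point in the trace anon, surp and resv have all come back as 0, and hugepages.sh cross-checks them against the kernel counters: the pool only passes if the free and total hugepage counts both line up with nr_hugepages + surp + resv (1024 in this run). A self-contained sketch of that arithmetic, with awk lookups standing in for the script's own get_meminfo calls and the values taken from this run:

# Sketch of the consistency check being traced; names are illustrative.
nr_hugepages=1024 anon=0 surp=0 resv=0
hp_total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
hp_free=$(awk '$1 == "HugePages_Free:" {print $2}' /proc/meminfo)
(( hp_free  == nr_hugepages + surp + resv )) &&
(( hp_total == nr_hugepages + surp + resv )) &&
echo 'hugepage pool consistent'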
00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 
00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.499 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.499 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.500 
11:42:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.500 11:42:11 -- setup/common.sh@33 -- # echo 1024 00:04:17.500 11:42:11 -- setup/common.sh@33 -- # return 0 00:04:17.500 11:42:11 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:17.500 11:42:11 -- setup/hugepages.sh@112 -- # get_nodes 00:04:17.500 11:42:11 -- setup/hugepages.sh@27 -- # local node 00:04:17.500 11:42:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.500 11:42:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:17.500 11:42:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.500 11:42:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:17.500 11:42:11 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:17.500 11:42:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:17.500 11:42:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.500 11:42:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.500 11:42:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:17.500 11:42:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.500 11:42:11 
-- setup/common.sh@18 -- # local node=0 00:04:17.500 11:42:11 -- setup/common.sh@19 -- # local var val 00:04:17.500 11:42:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.500 11:42:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.500 11:42:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:17.500 11:42:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:17.500 11:42:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.500 11:42:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.500 11:42:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52849100 kB' 'MemUsed: 12809908 kB' 'SwapCached: 0 kB' 'Active: 5143756 kB' 'Inactive: 3325564 kB' 'Active(anon): 4808300 kB' 'Inactive(anon): 0 kB' 'Active(file): 335456 kB' 'Inactive(file): 3325564 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8246572 kB' 'Mapped: 117260 kB' 'AnonPages: 225984 kB' 'Shmem: 4585552 kB' 'KernelStack: 14472 kB' 'PageTables: 5448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 169476 kB' 'Slab: 638668 kB' 'SReclaimable: 169476 kB' 'SUnreclaim: 469192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.500 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.500 11:42:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 
00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
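The scan running through these lines is the per-node repeat of the same lookup: get_nodes found two nodes, mem_f switched to /sys/devices/system/node/node0/meminfo, and the "Node 0 " prefix is stripped before the usual key match, which is what yields the node0=1024 expecting 1024 result just below. A rough stand-alone equivalent (illustrative only, not the script's code):

# Sketch: read the per-node hugepage totals the same way the trace does.
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    total=$(sed -n 's/^Node '"$node"' HugePages_Total:[[:space:]]*//p' "$node_dir/meminfo")
    echo "node${node}=${total:-0}"    # this run expects node0=1024
done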
00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # continue 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.501 11:42:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.501 11:42:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.501 11:42:11 -- setup/common.sh@33 -- # echo 0 00:04:17.501 11:42:11 -- setup/common.sh@33 -- # return 0 00:04:17.501 11:42:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.501 11:42:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:17.501 11:42:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:17.501 11:42:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.501 11:42:11 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:17.501 node0=1024 expecting 1024 00:04:17.501 11:42:11 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:17.501 11:42:11 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:17.501 11:42:11 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:17.501 11:42:11 -- setup/hugepages.sh@202 -- # setup output 00:04:17.501 11:42:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.501 11:42:11 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:20.807 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:20.807 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:20.807 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:20.807 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:20.807 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:20.807 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:20.807 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:20.807 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:20.807 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:20.807 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:20.807 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:20.807 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:20.807 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:20.807 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:20.807 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:20.807 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:20.807 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:20.807 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:21.072 11:42:14 -- setup/hugepages.sh@204 -- # 
verify_nr_hugepages 00:04:21.072 11:42:14 -- setup/hugepages.sh@89 -- # local node 00:04:21.072 11:42:14 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:21.072 11:42:14 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:21.072 11:42:14 -- setup/hugepages.sh@92 -- # local surp 00:04:21.072 11:42:14 -- setup/hugepages.sh@93 -- # local resv 00:04:21.072 11:42:14 -- setup/hugepages.sh@94 -- # local anon 00:04:21.072 11:42:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:21.072 11:42:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:21.072 11:42:14 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:21.072 11:42:14 -- setup/common.sh@18 -- # local node= 00:04:21.072 11:42:14 -- setup/common.sh@19 -- # local var val 00:04:21.072 11:42:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:21.072 11:42:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.072 11:42:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.072 11:42:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.072 11:42:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.072 11:42:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.072 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.072 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.072 11:42:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109414220 kB' 'MemAvailable: 112755304 kB' 'Buffers: 4132 kB' 'Cached: 10213492 kB' 'SwapCached: 0 kB' 'Active: 7325020 kB' 'Inactive: 3525960 kB' 'Active(anon): 6834424 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636260 kB' 'Mapped: 204644 kB' 'Shmem: 6201068 kB' 'KReclaimable: 297716 kB' 'Slab: 1141444 kB' 'SReclaimable: 297716 kB' 'SUnreclaim: 843728 kB' 'KernelStack: 27552 kB' 'PageTables: 9060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8419612 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235916 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB' 00:04:21.072 11:42:14 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.072 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.072 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.072 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.072 11:42:14 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.072 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.072 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.072 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.072 11:42:14 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.072 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.072 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.072 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.072 11:42:14 -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.072 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.072 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.072 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.072 11:42:14 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.072 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.072 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.072 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.072 11:42:14 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.072 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.072 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.073 11:42:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.073 11:42:14 -- setup/common.sh@33 -- # echo 0 00:04:21.073 11:42:14 -- setup/common.sh@33 -- # return 0 00:04:21.073 11:42:14 -- setup/hugepages.sh@97 -- # anon=0 00:04:21.073 11:42:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:21.073 
11:42:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.073 11:42:14 -- setup/common.sh@18 -- # local node= 00:04:21.073 11:42:14 -- setup/common.sh@19 -- # local var val 00:04:21.073 11:42:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:21.073 11:42:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.073 11:42:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.073 11:42:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.073 11:42:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.073 11:42:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.073 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109417252 kB' 'MemAvailable: 112758336 kB' 'Buffers: 4132 kB' 'Cached: 10213496 kB' 'SwapCached: 0 kB' 'Active: 7325824 kB' 'Inactive: 3525960 kB' 'Active(anon): 6835228 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636928 kB' 'Mapped: 205124 kB' 'Shmem: 6201072 kB' 'KReclaimable: 297716 kB' 'Slab: 1141444 kB' 'SReclaimable: 297716 kB' 'SUnreclaim: 843728 kB' 'KernelStack: 27472 kB' 'PageTables: 8796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8421904 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235852 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB' 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # 
continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.074 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.074 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.075 11:42:14 -- setup/common.sh@33 -- # echo 0 00:04:21.075 11:42:14 -- setup/common.sh@33 -- # return 0 00:04:21.075 11:42:14 -- setup/hugepages.sh@99 -- # surp=0 00:04:21.075 11:42:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:21.075 11:42:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:21.075 11:42:14 -- setup/common.sh@18 -- # local node= 00:04:21.075 11:42:14 -- setup/common.sh@19 -- # local var val 00:04:21.075 11:42:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:21.075 11:42:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.075 11:42:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.075 11:42:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.075 11:42:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.075 11:42:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109417112 kB' 'MemAvailable: 112758196 kB' 'Buffers: 4132 kB' 'Cached: 10213508 kB' 'SwapCached: 0 kB' 
'Active: 7329424 kB' 'Inactive: 3525960 kB' 'Active(anon): 6838828 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641076 kB' 'Mapped: 205332 kB' 'Shmem: 6201084 kB' 'KReclaimable: 297716 kB' 'Slab: 1141420 kB' 'SReclaimable: 297716 kB' 'SUnreclaim: 843704 kB' 'KernelStack: 27504 kB' 'PageTables: 8892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8425760 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235856 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB' 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.075 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:21.075 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 
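The entries that follow are the accounting step of verify_nr_hugepages: the surplus and reserved counters collected above are reconciled against HugePages_Total, the expected 1024 pages are confirmed, and the same counters are then re-read per NUMA node from /sys/devices/system/node/node<N>/meminfo. A compressed sketch of that bookkeeping, reusing the hypothetical get_meminfo_sketch helper from above (verify_nr_hugepages_sketch and the 1024-page default are illustrative; the arithmetic mirrors what the trace evaluates):

verify_nr_hugepages_sketch() {                    # relies on extglob from the earlier sketch
    local expected=${1:-1024}                     # 1024 x 2048 kB pages in this run
    local anon surp resv total node n
    anon=$(get_meminfo_sketch AnonHugePages)      # 0 kB in the trace
    surp=$(get_meminfo_sketch HugePages_Surp)     # surplus pages, 0 here
    resv=$(get_meminfo_sketch HugePages_Rsvd)     # reserved pages, 0 here
    total=$(get_meminfo_sketch HugePages_Total)   # 1024 here
    echo "nr_hugepages=$expected resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    # the system-wide pool must account for the requested, surplus and reserved pages
    (( total == expected + surp + resv )) || return 1
    # the same counter is then read per NUMA node; in this run node0 holds all
    # 1024 pages and node1 holds none ("node0=1024 expecting 1024" earlier in the log)
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}
        echo "node$n=$(get_meminfo_sketch HugePages_Total "$n")"
    done
}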
00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.076 11:42:14 -- setup/common.sh@33 -- # echo 0 00:04:21.076 11:42:14 -- setup/common.sh@33 -- # return 0 00:04:21.076 11:42:14 -- setup/hugepages.sh@100 -- # resv=0 00:04:21.076 11:42:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:21.076 nr_hugepages=1024 00:04:21.076 11:42:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:21.076 resv_hugepages=0 00:04:21.076 11:42:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:21.076 surplus_hugepages=0 00:04:21.076 11:42:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:21.076 anon_hugepages=0 00:04:21.076 11:42:14 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:21.076 11:42:14 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:21.076 11:42:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:21.076 11:42:14 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:21.076 11:42:14 -- setup/common.sh@18 -- # local node= 00:04:21.076 11:42:14 -- setup/common.sh@19 -- # local var val 00:04:21.076 11:42:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:21.076 11:42:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.076 11:42:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.076 11:42:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.076 11:42:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.076 11:42:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109416860 kB' 'MemAvailable: 112757944 kB' 'Buffers: 4132 kB' 'Cached: 10213520 kB' 'SwapCached: 0 kB' 'Active: 7323912 kB' 'Inactive: 3525960 kB' 'Active(anon): 6833316 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635516 kB' 'Mapped: 204892 kB' 'Shmem: 6201096 kB' 'KReclaimable: 297716 kB' 'Slab: 1141420 kB' 'SReclaimable: 297716 kB' 'SUnreclaim: 843704 kB' 'KernelStack: 27488 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8419656 kB' 'VmallocTotal: 
13743895347199 kB' 'VmallocUsed: 235836 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4019572 kB' 'DirectMap2M: 44943360 kB' 'DirectMap1G: 87031808 kB' 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.076 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.076 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- 
setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.077 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.077 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.078 11:42:14 -- 
setup/common.sh@33 -- # echo 1024 00:04:21.078 11:42:14 -- setup/common.sh@33 -- # return 0 00:04:21.078 11:42:14 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:21.078 11:42:14 -- setup/hugepages.sh@112 -- # get_nodes 00:04:21.078 11:42:14 -- setup/hugepages.sh@27 -- # local node 00:04:21.078 11:42:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.078 11:42:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:21.078 11:42:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.078 11:42:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:21.078 11:42:14 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:21.078 11:42:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:21.078 11:42:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:21.078 11:42:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:21.078 11:42:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:21.078 11:42:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.078 11:42:14 -- setup/common.sh@18 -- # local node=0 00:04:21.078 11:42:14 -- setup/common.sh@19 -- # local var val 00:04:21.078 11:42:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:21.078 11:42:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.078 11:42:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:21.078 11:42:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:21.078 11:42:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.078 11:42:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52836648 kB' 'MemUsed: 12822360 kB' 'SwapCached: 0 kB' 'Active: 5143728 kB' 'Inactive: 3325564 kB' 'Active(anon): 4808272 kB' 'Inactive(anon): 0 kB' 'Active(file): 335456 kB' 'Inactive(file): 3325564 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8246692 kB' 'Mapped: 117260 kB' 'AnonPages: 225772 kB' 'Shmem: 4585672 kB' 'KernelStack: 14456 kB' 'PageTables: 5396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 169476 kB' 'Slab: 638524 kB' 'SReclaimable: 169476 kB' 'SUnreclaim: 469048 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 
11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.078 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.078 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # continue 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.079 11:42:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.079 11:42:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.079 11:42:14 -- setup/common.sh@33 -- # echo 0 00:04:21.079 11:42:14 -- setup/common.sh@33 -- # return 0 00:04:21.079 11:42:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:21.079 11:42:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:21.079 11:42:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:21.079 11:42:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:21.079 11:42:14 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:21.079 node0=1024 expecting 1024 00:04:21.079 11:42:14 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:21.079 00:04:21.079 real 0m7.352s 00:04:21.079 user 0m2.897s 00:04:21.079 sys 0m4.582s 00:04:21.079 11:42:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.079 11:42:14 -- common/autotest_common.sh@10 -- # set +x 00:04:21.079 ************************************ 00:04:21.079 END TEST no_shrink_alloc 00:04:21.079 ************************************ 00:04:21.079 11:42:14 -- setup/hugepages.sh@217 -- # clear_hp 00:04:21.079 11:42:14 -- setup/hugepages.sh@37 -- # local node hp 00:04:21.079 11:42:14 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:21.079 
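Note on the long runs of "[[ <key> == HugePages_Total ]] ... continue" entries above: that is xtrace from a plain linear scan. The helper reads /proc/meminfo, or the per-node /sys/devices/system/node/node0/meminfo view when a node index is given, strips the "Node <n>" prefix, and walks the keys one by one until the requested field matches, echoing its value (1024 for HugePages_Total, then 0 for HugePages_Surp on node0 in this run). A minimal self-contained sketch of that pattern, with hypothetical names rather than the repository's setup/common.sh:

#!/usr/bin/env bash
# get_node_meminfo KEY [NODE]: illustrative re-creation of the scan traced
# above; this is a sketch, not the repository's setup/common.sh.
get_node_meminfo() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Prefer the per-NUMA-node view when a node index is given and present.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a f
    while IFS=': ' read -r -a f; do
        # Per-node files prefix each line with "Node <n>"; drop those two
        # fields so what remains matches /proc/meminfo ("Key: value kB").
        [[ ${f[0]:-} == Node ]] && f=("${f[@]:2}")
        if [[ ${f[0]:-} == "$key" ]]; then
            echo "${f[1]}"
            return 0
        fi
    done < "$mem_f"
    return 1
}

get_node_meminfo HugePages_Total 0   # prints 1024 on the node traced above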
11:42:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.079 11:42:14 -- setup/hugepages.sh@41 -- # echo 0 00:04:21.079 11:42:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.079 11:42:14 -- setup/hugepages.sh@41 -- # echo 0 00:04:21.340 11:42:14 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:21.340 11:42:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.340 11:42:14 -- setup/hugepages.sh@41 -- # echo 0 00:04:21.340 11:42:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.340 11:42:14 -- setup/hugepages.sh@41 -- # echo 0 00:04:21.340 11:42:14 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:21.340 11:42:14 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:21.340 00:04:21.340 real 0m26.297s 00:04:21.340 user 0m10.470s 00:04:21.340 sys 0m16.264s 00:04:21.340 11:42:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.340 11:42:14 -- common/autotest_common.sh@10 -- # set +x 00:04:21.340 ************************************ 00:04:21.340 END TEST hugepages 00:04:21.340 ************************************ 00:04:21.340 11:42:14 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:21.340 11:42:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:21.340 11:42:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:21.340 11:42:14 -- common/autotest_common.sh@10 -- # set +x 00:04:21.340 ************************************ 00:04:21.340 START TEST driver 00:04:21.340 ************************************ 00:04:21.340 11:42:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:21.340 * Looking for test storage... 
00:04:21.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:21.340 11:42:14 -- setup/driver.sh@68 -- # setup reset 00:04:21.340 11:42:14 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:21.340 11:42:14 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:26.640 11:42:19 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:26.640 11:42:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:26.640 11:42:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:26.640 11:42:19 -- common/autotest_common.sh@10 -- # set +x 00:04:26.640 ************************************ 00:04:26.640 START TEST guess_driver 00:04:26.640 ************************************ 00:04:26.640 11:42:19 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:26.640 11:42:19 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:26.640 11:42:19 -- setup/driver.sh@47 -- # local fail=0 00:04:26.640 11:42:19 -- setup/driver.sh@49 -- # pick_driver 00:04:26.640 11:42:19 -- setup/driver.sh@36 -- # vfio 00:04:26.640 11:42:19 -- setup/driver.sh@21 -- # local iommu_grups 00:04:26.640 11:42:19 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:26.640 11:42:19 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:26.640 11:42:19 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:26.640 11:42:19 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:26.640 11:42:19 -- setup/driver.sh@29 -- # (( 322 > 0 )) 00:04:26.640 11:42:19 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:26.640 11:42:19 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:26.640 11:42:19 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:26.640 11:42:19 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:26.640 11:42:19 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:26.640 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:26.640 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:26.640 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:26.640 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:26.640 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:26.640 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:26.640 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:26.640 11:42:19 -- setup/driver.sh@30 -- # return 0 00:04:26.640 11:42:19 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:26.640 11:42:19 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:26.640 11:42:19 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:26.640 11:42:19 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:26.640 Looking for driver=vfio-pci 00:04:26.640 11:42:19 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.640 11:42:19 -- setup/driver.sh@45 -- # setup output config 00:04:26.640 11:42:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.640 11:42:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:29.188 11:42:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.188 11:42:22 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:04:29.188 11:42:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.188 11:42:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.188 11:42:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.188 11:42:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.188 11:42:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.188 11:42:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.188 11:42:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.188 11:42:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.188 11:42:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.188 11:42:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.188 11:42:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.188 11:42:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.188 11:42:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.188 11:42:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.188 11:42:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.188 11:42:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.188 11:42:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.188 11:42:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.188 11:42:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.188 11:42:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.188 11:42:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.188 11:42:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.188 11:42:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.188 11:42:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.188 11:42:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.188 11:42:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.188 11:42:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.188 11:42:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.188 11:42:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.188 11:42:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.188 11:42:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.188 11:42:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.188 11:42:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.188 11:42:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.188 11:42:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.188 11:42:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.188 11:42:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.188 11:42:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.188 11:42:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.188 11:42:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.188 11:42:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.188 11:42:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.188 11:42:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.188 11:42:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.188 11:42:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.188 11:42:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.188 11:42:22 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:04:29.188 11:42:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.188 11:42:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.188 11:42:22 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:29.188 11:42:22 -- setup/driver.sh@65 -- # setup reset 00:04:29.188 11:42:22 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:29.188 11:42:22 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:34.479 00:04:34.479 real 0m8.188s 00:04:34.479 user 0m2.692s 00:04:34.479 sys 0m4.750s 00:04:34.479 11:42:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.479 11:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:34.479 ************************************ 00:04:34.479 END TEST guess_driver 00:04:34.479 ************************************ 00:04:34.479 00:04:34.479 real 0m12.742s 00:04:34.479 user 0m3.958s 00:04:34.479 sys 0m7.256s 00:04:34.479 11:42:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.479 11:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:34.479 ************************************ 00:04:34.479 END TEST driver 00:04:34.479 ************************************ 00:04:34.479 11:42:27 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:34.479 11:42:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:34.479 11:42:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:34.479 11:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:34.479 ************************************ 00:04:34.479 START TEST devices 00:04:34.479 ************************************ 00:04:34.479 11:42:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:34.479 * Looking for test storage... 
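For the driver suite above, guess_driver settled on vfio-pci by confirming the IOMMU is active (322 groups under /sys/kernel/iommu_groups) and that modprobe can resolve vfio_pci to real .ko files. A rough stand-alone sketch of that decision; the helper name is invented and the uio_pci_generic fallback is an assumption, not shown in this log:

#!/usr/bin/env bash
# Sketch of the driver-pick logic traced above (illustrative only).
shopt -s nullglob
pick_pci_driver() {
    local groups=(/sys/kernel/iommu_groups/*)
    # An active IOMMU exposes at least one group; the run above saw 322.
    # The real test also consults /sys/module/vfio/parameters/enable_unsafe_noiommu_mode.
    if (( ${#groups[@]} > 0 )) &&
       modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
    else
        echo 'No valid driver found' >&2
        return 1
    fi
}
pick_pci_driver   # prints "vfio-pci" on the machine traced above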
00:04:34.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:34.479 11:42:27 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:34.479 11:42:27 -- setup/devices.sh@192 -- # setup reset 00:04:34.479 11:42:27 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:34.479 11:42:27 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:38.687 11:42:31 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:38.687 11:42:31 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:38.687 11:42:31 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:38.687 11:42:31 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:38.687 11:42:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:38.687 11:42:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:38.687 11:42:31 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:38.687 11:42:31 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:38.687 11:42:31 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:38.687 11:42:31 -- setup/devices.sh@196 -- # blocks=() 00:04:38.687 11:42:31 -- setup/devices.sh@196 -- # declare -a blocks 00:04:38.687 11:42:31 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:38.687 11:42:31 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:38.687 11:42:31 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:38.687 11:42:31 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:38.687 11:42:31 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:38.687 11:42:31 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:38.687 11:42:31 -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:38.687 11:42:31 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:38.687 11:42:31 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:38.687 11:42:31 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:38.687 11:42:31 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:38.687 No valid GPT data, bailing 00:04:38.687 11:42:31 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:38.687 11:42:31 -- scripts/common.sh@393 -- # pt= 00:04:38.687 11:42:31 -- scripts/common.sh@394 -- # return 1 00:04:38.687 11:42:31 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:38.687 11:42:31 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:38.687 11:42:31 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:38.687 11:42:31 -- setup/common.sh@80 -- # echo 1920383410176 00:04:38.687 11:42:31 -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:38.687 11:42:31 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:38.687 11:42:31 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:38.687 11:42:31 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:38.687 11:42:31 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:38.687 11:42:31 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:38.687 11:42:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:38.687 11:42:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:38.687 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:04:38.687 ************************************ 00:04:38.687 START TEST nvme_mount 00:04:38.687 ************************************ 00:04:38.687 11:42:31 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:04:38.687 11:42:31 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:38.687 11:42:31 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:38.687 11:42:31 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.687 11:42:31 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:38.687 11:42:31 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:38.687 11:42:31 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:38.687 11:42:31 -- setup/common.sh@40 -- # local part_no=1 00:04:38.687 11:42:31 -- setup/common.sh@41 -- # local size=1073741824 00:04:38.687 11:42:31 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:38.687 11:42:31 -- setup/common.sh@44 -- # parts=() 00:04:38.687 11:42:31 -- setup/common.sh@44 -- # local parts 00:04:38.687 11:42:31 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:38.687 11:42:31 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.687 11:42:31 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:38.687 11:42:31 -- setup/common.sh@46 -- # (( part++ )) 00:04:38.687 11:42:31 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.687 11:42:31 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:38.687 11:42:31 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:38.687 11:42:31 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:38.948 Creating new GPT entries in memory. 00:04:38.948 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:38.948 other utilities. 00:04:38.948 11:42:32 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:38.948 11:42:32 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:38.948 11:42:32 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:38.948 11:42:32 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:38.948 11:42:32 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:39.891 Creating new GPT entries in memory. 00:04:39.891 The operation has completed successfully. 
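The sequence above wipes the disk's GPT and creates a single 1 GiB test partition, holding a lock on the whole-disk node while sgdisk rewrites the table, then waits for the kernel to publish nvme0n1p1 before formatting it. A simplified sketch of the same steps, using udevadm settle plus a poll in place of the harness's sync_dev_uevents.sh helper; the device path is only an example:

#!/usr/bin/env bash
set -euo pipefail
disk=/dev/nvme0n1

sgdisk "$disk" --zap-all                            # destroy old GPT/MBR structures
# Hold the whole-disk node while rewriting the table, as the flock above does.
flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # sectors 2048-2099199 = 1 GiB

udevadm settle                                      # let udev process the partition uevent
until [[ -b ${disk}p1 ]]; do sleep 0.1; done        # wait for the node to appear

mkfs.ext4 -qF "${disk}p1"                           # same mkfs invocation as the log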
00:04:39.891 11:42:33 -- setup/common.sh@57 -- # (( part++ )) 00:04:39.891 11:42:33 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:39.891 11:42:33 -- setup/common.sh@62 -- # wait 1710312 00:04:40.152 11:42:33 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.152 11:42:33 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:40.152 11:42:33 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.152 11:42:33 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:40.152 11:42:33 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:40.152 11:42:33 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.152 11:42:33 -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:40.152 11:42:33 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:40.152 11:42:33 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:40.152 11:42:33 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.152 11:42:33 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:40.152 11:42:33 -- setup/devices.sh@53 -- # local found=0 00:04:40.152 11:42:33 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:40.152 11:42:33 -- setup/devices.sh@56 -- # : 00:04:40.152 11:42:33 -- setup/devices.sh@59 -- # local pci status 00:04:40.152 11:42:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.152 11:42:33 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:40.152 11:42:33 -- setup/devices.sh@47 -- # setup output config 00:04:40.152 11:42:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.152 11:42:33 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:43.456 11:42:36 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.456 11:42:36 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:43.456 11:42:36 -- setup/devices.sh@63 -- # found=1 00:04:43.456 11:42:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.456 11:42:36 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.456 11:42:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.456 11:42:36 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.456 11:42:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.456 11:42:36 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.456 11:42:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.456 11:42:36 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.456 11:42:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.456 11:42:36 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.456 
11:42:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.456 11:42:36 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.456 11:42:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.456 11:42:36 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.456 11:42:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.456 11:42:36 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.456 11:42:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.456 11:42:36 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.456 11:42:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.456 11:42:36 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.456 11:42:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.456 11:42:36 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.456 11:42:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.456 11:42:36 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.456 11:42:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.456 11:42:36 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.456 11:42:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.456 11:42:36 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.456 11:42:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.456 11:42:36 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.456 11:42:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.456 11:42:36 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.456 11:42:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.456 11:42:37 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:43.456 11:42:37 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:43.456 11:42:37 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.456 11:42:37 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:43.456 11:42:37 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:43.456 11:42:37 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:43.456 11:42:37 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.456 11:42:37 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.456 11:42:37 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:43.456 11:42:37 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:43.456 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:43.456 11:42:37 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:43.456 11:42:37 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:43.716 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:43.716 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:43.716 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:43.716 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:43.716 11:42:37 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:43.716 11:42:37 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:43.716 11:42:37 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.716 11:42:37 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:43.716 11:42:37 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:43.716 11:42:37 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.716 11:42:37 -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:43.716 11:42:37 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:43.716 11:42:37 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:43.716 11:42:37 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.716 11:42:37 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:43.716 11:42:37 -- setup/devices.sh@53 -- # local found=0 00:04:43.716 11:42:37 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:43.716 11:42:37 -- setup/devices.sh@56 -- # : 00:04:43.716 11:42:37 -- setup/devices.sh@59 -- # local pci status 00:04:43.716 11:42:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.716 11:42:37 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:43.716 11:42:37 -- setup/devices.sh@47 -- # setup output config 00:04:43.716 11:42:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.716 11:42:37 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:47.101 11:42:40 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.101 11:42:40 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:47.101 11:42:40 -- setup/devices.sh@63 -- # found=1 00:04:47.101 11:42:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.101 11:42:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.101 11:42:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.101 11:42:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.101 11:42:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.101 11:42:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.101 11:42:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.101 11:42:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.101 11:42:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.101 11:42:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.101 11:42:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.101 11:42:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.101 11:42:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.101 11:42:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.101 11:42:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.101 11:42:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.101 11:42:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.101 11:42:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.101 11:42:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.101 11:42:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.101 11:42:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.101 11:42:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.101 11:42:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.101 11:42:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.101 11:42:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.101 11:42:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.101 11:42:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.101 11:42:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.101 11:42:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.101 11:42:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.101 11:42:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.101 11:42:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.101 11:42:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.101 11:42:40 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:47.101 11:42:40 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:47.101 11:42:40 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.362 11:42:40 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:47.362 11:42:40 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:47.362 11:42:40 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.362 11:42:40 -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:47.362 11:42:40 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:47.362 11:42:40 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:47.363 11:42:40 -- setup/devices.sh@50 -- # local mount_point= 00:04:47.363 11:42:40 -- setup/devices.sh@51 -- # local test_file= 00:04:47.363 11:42:40 -- setup/devices.sh@53 -- # local found=0 00:04:47.363 11:42:40 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:47.363 11:42:40 -- setup/devices.sh@59 -- # local pci status 00:04:47.363 11:42:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.363 11:42:40 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:47.363 11:42:40 -- setup/devices.sh@47 -- # setup output config 00:04:47.363 11:42:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.363 11:42:40 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:50.667 11:42:44 -- 
setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.667 11:42:44 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:50.667 11:42:44 -- setup/devices.sh@63 -- # found=1 00:04:50.667 11:42:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.667 11:42:44 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.667 11:42:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.667 11:42:44 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.667 11:42:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.667 11:42:44 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.667 11:42:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.667 11:42:44 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.667 11:42:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.667 11:42:44 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.667 11:42:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.667 11:42:44 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.667 11:42:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.667 11:42:44 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.667 11:42:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.667 11:42:44 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.667 11:42:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.667 11:42:44 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.667 11:42:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.667 11:42:44 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.667 11:42:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.667 11:42:44 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.667 11:42:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.667 11:42:44 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.667 11:42:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.667 11:42:44 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.667 11:42:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.667 11:42:44 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.667 11:42:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.667 11:42:44 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.667 11:42:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.667 11:42:44 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.667 11:42:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.667 11:42:44 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:50.667 11:42:44 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:50.667 11:42:44 -- setup/devices.sh@68 -- # return 0 00:04:50.667 11:42:44 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:50.667 11:42:44 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.667 11:42:44 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:04:50.667 11:42:44 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:50.667 11:42:44 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:50.667 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:50.667 00:04:50.667 real 0m12.803s 00:04:50.667 user 0m3.830s 00:04:50.667 sys 0m6.884s 00:04:50.667 11:42:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.667 11:42:44 -- common/autotest_common.sh@10 -- # set +x 00:04:50.667 ************************************ 00:04:50.667 END TEST nvme_mount 00:04:50.667 ************************************ 00:04:50.928 11:42:44 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:50.928 11:42:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:50.928 11:42:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.928 11:42:44 -- common/autotest_common.sh@10 -- # set +x 00:04:50.928 ************************************ 00:04:50.928 START TEST dm_mount 00:04:50.928 ************************************ 00:04:50.928 11:42:44 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:50.928 11:42:44 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:50.928 11:42:44 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:50.928 11:42:44 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:50.928 11:42:44 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:50.928 11:42:44 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:50.928 11:42:44 -- setup/common.sh@40 -- # local part_no=2 00:04:50.928 11:42:44 -- setup/common.sh@41 -- # local size=1073741824 00:04:50.928 11:42:44 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:50.928 11:42:44 -- setup/common.sh@44 -- # parts=() 00:04:50.928 11:42:44 -- setup/common.sh@44 -- # local parts 00:04:50.928 11:42:44 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:50.928 11:42:44 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:50.928 11:42:44 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:50.928 11:42:44 -- setup/common.sh@46 -- # (( part++ )) 00:04:50.928 11:42:44 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:50.928 11:42:44 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:50.928 11:42:44 -- setup/common.sh@46 -- # (( part++ )) 00:04:50.928 11:42:44 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:50.928 11:42:44 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:50.928 11:42:44 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:50.928 11:42:44 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:51.871 Creating new GPT entries in memory. 00:04:51.871 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:51.871 other utilities. 00:04:51.871 11:42:45 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:51.871 11:42:45 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:51.871 11:42:45 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:51.871 11:42:45 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:51.871 11:42:45 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:52.814 Creating new GPT entries in memory. 00:04:52.814 The operation has completed successfully. 
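The dm_mount run continuing below adds a second 1 GiB partition and then builds a device-mapper node (nvme_dm_test, which resolves to dm-1) over the two partitions before formatting and mounting it. The log does not show the dm table itself, so the linear concatenation in this sketch is an assumption; device names and the mount point are illustrative:

#!/usr/bin/env bash
set -euo pipefail
disk=/dev/nvme0n1
p1=${disk}p1
p2=${disk}p2
name=nvme_dm_test
mnt=/mnt/dm_test

size1=$(blockdev --getsz "$p1")   # partition sizes in 512-byte sectors
size2=$(blockdev --getsz "$p2")

# Table lines: <logical start> <length> linear <backing device> <backing offset>
dmsetup create "$name" <<EOF
0 $size1 linear $p1 0
$size1 $size2 linear $p2 0
EOF

readlink -f "/dev/mapper/$name"           # resolves to /dev/dm-<N>, dm-1 in this run
ls "/sys/class/block/${p1##*/}/holders"   # the holder link the verify step checks

mkfs.ext4 -qF "/dev/mapper/$name"
mkdir -p "$mnt"
mount "/dev/mapper/$name" "$mnt"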
00:04:52.814 11:42:46 -- setup/common.sh@57 -- # (( part++ )) 00:04:52.814 11:42:46 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:52.814 11:42:46 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:52.814 11:42:46 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:52.814 11:42:46 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:53.777 The operation has completed successfully. 00:04:53.777 11:42:47 -- setup/common.sh@57 -- # (( part++ )) 00:04:53.777 11:42:47 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:53.777 11:42:47 -- setup/common.sh@62 -- # wait 1715347 00:04:54.038 11:42:47 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:54.038 11:42:47 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:54.038 11:42:47 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:54.038 11:42:47 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:54.038 11:42:47 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:54.038 11:42:47 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:54.038 11:42:47 -- setup/devices.sh@161 -- # break 00:04:54.038 11:42:47 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:54.038 11:42:47 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:54.038 11:42:47 -- setup/devices.sh@165 -- # dm=/dev/dm-1 00:04:54.038 11:42:47 -- setup/devices.sh@166 -- # dm=dm-1 00:04:54.038 11:42:47 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-1 ]] 00:04:54.038 11:42:47 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-1 ]] 00:04:54.038 11:42:47 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:54.038 11:42:47 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:54.038 11:42:47 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:54.038 11:42:47 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:54.038 11:42:47 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:54.038 11:42:47 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:54.038 11:42:47 -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:54.038 11:42:47 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:54.038 11:42:47 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:54.038 11:42:47 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:54.038 11:42:47 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:54.038 11:42:47 -- setup/devices.sh@53 -- # local found=0 00:04:54.038 11:42:47 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:54.038 11:42:47 -- setup/devices.sh@56 -- # : 00:04:54.038 11:42:47 -- 
setup/devices.sh@59 -- # local pci status 00:04:54.038 11:42:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.038 11:42:47 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:54.038 11:42:47 -- setup/devices.sh@47 -- # setup output config 00:04:54.038 11:42:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.038 11:42:47 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:57.346 11:42:50 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:57.346 11:42:50 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:57.346 11:42:50 -- setup/devices.sh@63 -- # found=1 00:04:57.346 11:42:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.346 11:42:50 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:57.346 11:42:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.346 11:42:50 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:57.346 11:42:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.346 11:42:50 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:57.346 11:42:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.346 11:42:50 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:57.346 11:42:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.346 11:42:50 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:57.346 11:42:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.346 11:42:50 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:57.346 11:42:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.346 11:42:50 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:57.346 11:42:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.346 11:42:50 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:57.346 11:42:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.346 11:42:50 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:57.346 11:42:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.346 11:42:50 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:57.346 11:42:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.346 11:42:50 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:57.346 11:42:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.346 11:42:50 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:57.346 11:42:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.346 11:42:50 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:57.346 11:42:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.346 11:42:50 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:57.346 11:42:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.346 11:42:50 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:57.346 11:42:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.346 11:42:50 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:57.346 11:42:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.346 11:42:50 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:57.346 11:42:50 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:57.346 11:42:50 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:57.346 11:42:50 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:57.346 11:42:50 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:57.346 11:42:50 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:57.346 11:42:50 -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 '' '' 00:04:57.346 11:42:50 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:57.346 11:42:50 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 00:04:57.346 11:42:50 -- setup/devices.sh@50 -- # local mount_point= 00:04:57.346 11:42:50 -- setup/devices.sh@51 -- # local test_file= 00:04:57.346 11:42:50 -- setup/devices.sh@53 -- # local found=0 00:04:57.346 11:42:50 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:57.346 11:42:50 -- setup/devices.sh@59 -- # local pci status 00:04:57.346 11:42:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.346 11:42:50 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:57.346 11:42:50 -- setup/devices.sh@47 -- # setup output config 00:04:57.346 11:42:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.346 11:42:50 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:00.650 11:42:54 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.650 11:42:54 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\1\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\1* ]] 00:05:00.650 11:42:54 -- setup/devices.sh@63 -- # found=1 00:05:00.650 11:42:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.650 11:42:54 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.650 11:42:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.650 11:42:54 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.650 11:42:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.650 11:42:54 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.650 11:42:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.650 11:42:54 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.650 11:42:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.650 11:42:54 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.650 11:42:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.650 11:42:54 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.650 11:42:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.650 11:42:54 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.650 11:42:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 
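The verify steps above decide that the device-mapper node is active by looking for dm-1 under the partitions' holders directories in sysfs. A small sketch of that lookup (partition name assumed to be nvme0n1p1, as in the log):

  part=nvme0n1p1
  for holder in /sys/class/block/"$part"/holders/*; do
      [ -e "$holder" ] || continue                      # glob did not match: no holder
      echo "$part is held by $(basename "$holder")"     # prints "dm-1" while nvme_dm_test exists
  done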
00:05:00.650 11:42:54 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.650 11:42:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.650 11:42:54 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.650 11:42:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.651 11:42:54 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.651 11:42:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.651 11:42:54 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.651 11:42:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.651 11:42:54 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.651 11:42:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.651 11:42:54 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.651 11:42:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.651 11:42:54 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.651 11:42:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.651 11:42:54 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.651 11:42:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.651 11:42:54 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.651 11:42:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.651 11:42:54 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.651 11:42:54 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:00.651 11:42:54 -- setup/devices.sh@68 -- # return 0 00:05:00.651 11:42:54 -- setup/devices.sh@187 -- # cleanup_dm 00:05:00.651 11:42:54 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:00.651 11:42:54 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:00.651 11:42:54 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:00.651 11:42:54 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:00.651 11:42:54 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:00.651 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:00.651 11:42:54 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:00.651 11:42:54 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:00.651 00:05:00.651 real 0m9.797s 00:05:00.651 user 0m2.447s 00:05:00.651 sys 0m4.321s 00:05:00.651 11:42:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.651 11:42:54 -- common/autotest_common.sh@10 -- # set +x 00:05:00.651 ************************************ 00:05:00.651 END TEST dm_mount 00:05:00.651 ************************************ 00:05:00.651 11:42:54 -- setup/devices.sh@1 -- # cleanup 00:05:00.651 11:42:54 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:00.651 11:42:54 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.651 11:42:54 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:00.651 11:42:54 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:00.651 11:42:54 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:00.651 11:42:54 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:00.912 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:00.912 /dev/nvme0n1: 8 bytes were erased at offset 
0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:00.912 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:00.912 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:00.912 11:42:54 -- setup/devices.sh@12 -- # cleanup_dm 00:05:00.912 11:42:54 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:00.912 11:42:54 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:00.912 11:42:54 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:00.912 11:42:54 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:00.912 11:42:54 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:00.912 11:42:54 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:00.912 00:05:00.912 real 0m26.912s 00:05:00.912 user 0m7.738s 00:05:00.912 sys 0m13.943s 00:05:00.912 11:42:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.912 11:42:54 -- common/autotest_common.sh@10 -- # set +x 00:05:00.912 ************************************ 00:05:00.912 END TEST devices 00:05:00.912 ************************************ 00:05:00.912 00:05:00.912 real 1m30.878s 00:05:00.912 user 0m30.350s 00:05:00.912 sys 0m52.085s 00:05:00.912 11:42:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.912 11:42:54 -- common/autotest_common.sh@10 -- # set +x 00:05:00.912 ************************************ 00:05:00.912 END TEST setup.sh 00:05:00.912 ************************************ 00:05:00.912 11:42:54 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:04.218 Hugepages 00:05:04.218 node hugesize free / total 00:05:04.218 node0 1048576kB 0 / 0 00:05:04.218 node0 2048kB 2048 / 2048 00:05:04.218 node1 1048576kB 0 / 0 00:05:04.218 node1 2048kB 0 / 0 00:05:04.218 00:05:04.218 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:04.218 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:04.218 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:04.218 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:04.218 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:04.218 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:04.218 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:04.218 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:04.218 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:04.479 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:04.479 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:04.479 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:04.479 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:04.479 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:04.479 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:04.479 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:04.479 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:04.479 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:04.479 11:42:58 -- spdk/autotest.sh@141 -- # uname -s 00:05:04.479 11:42:58 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:04.479 11:42:58 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:05:04.479 11:42:58 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:07.784 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:07.784 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:07.784 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:07.784 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:07.784 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:07.784 0000:80:01.3 (8086 0b00): 
ioatdma -> vfio-pci 00:05:07.784 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:07.784 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:07.784 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:07.784 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:07.784 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:07.784 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:07.784 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:07.784 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:07.784 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:07.784 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:09.699 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:09.699 11:43:03 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:11.087 11:43:04 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:11.087 11:43:04 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:11.087 11:43:04 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:11.087 11:43:04 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:11.087 11:43:04 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:11.087 11:43:04 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:11.087 11:43:04 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:11.087 11:43:04 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:11.087 11:43:04 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:11.087 11:43:04 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:11.087 11:43:04 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:11.087 11:43:04 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:14.392 Waiting for block devices as requested 00:05:14.392 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:14.392 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:14.392 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:14.652 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:14.652 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:14.652 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:14.913 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:14.913 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:14.913 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:05:15.174 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:15.174 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:15.174 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:15.435 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:15.435 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:15.435 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:15.435 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:15.696 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:15.696 11:43:09 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:15.696 11:43:09 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:15.696 11:43:09 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:15.696 11:43:09 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:05:15.696 11:43:09 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:15.696 11:43:09 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:15.696 11:43:09 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:15.696 11:43:09 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:15.696 11:43:09 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:15.696 11:43:09 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:15.696 11:43:09 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:15.696 11:43:09 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:15.696 11:43:09 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:15.696 11:43:09 -- common/autotest_common.sh@1530 -- # oacs=' 0x5f' 00:05:15.696 11:43:09 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:15.696 11:43:09 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:15.696 11:43:09 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:15.696 11:43:09 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:15.696 11:43:09 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:15.696 11:43:09 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:15.696 11:43:09 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:15.696 11:43:09 -- common/autotest_common.sh@1542 -- # continue 00:05:15.696 11:43:09 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:15.696 11:43:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:15.696 11:43:09 -- common/autotest_common.sh@10 -- # set +x 00:05:15.696 11:43:09 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:15.696 11:43:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:15.696 11:43:09 -- common/autotest_common.sh@10 -- # set +x 00:05:15.696 11:43:09 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:19.001 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:19.001 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:19.262 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:19.262 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:19.262 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:19.262 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:19.262 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:19.262 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:19.262 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:19.262 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:19.262 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:19.262 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:19.262 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:19.262 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:19.262 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:19.262 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:19.262 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:19.262 11:43:13 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:19.262 11:43:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:19.262 11:43:13 -- common/autotest_common.sh@10 -- # set +x 00:05:19.523 11:43:13 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:19.523 11:43:13 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:19.523 11:43:13 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:19.523 11:43:13 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:19.523 11:43:13 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:19.523 11:43:13 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:19.523 11:43:13 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:19.523 
11:43:13 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:19.523 11:43:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:19.523 11:43:13 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:19.523 11:43:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:19.523 11:43:13 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:19.523 11:43:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:19.523 11:43:13 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:19.523 11:43:13 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:19.523 11:43:13 -- common/autotest_common.sh@1565 -- # device=0xa80a 00:05:19.523 11:43:13 -- common/autotest_common.sh@1566 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:19.523 11:43:13 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:19.523 11:43:13 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:19.523 11:43:13 -- common/autotest_common.sh@1578 -- # return 0 00:05:19.523 11:43:13 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:19.523 11:43:13 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:19.523 11:43:13 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:19.523 11:43:13 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:19.523 11:43:13 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:19.523 11:43:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:19.523 11:43:13 -- common/autotest_common.sh@10 -- # set +x 00:05:19.523 11:43:13 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:19.523 11:43:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.523 11:43:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.523 11:43:13 -- common/autotest_common.sh@10 -- # set +x 00:05:19.523 ************************************ 00:05:19.523 START TEST env 00:05:19.523 ************************************ 00:05:19.523 11:43:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:19.523 * Looking for test storage... 
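The opal_revert_cleanup step above only acts on controllers whose PCI device ID matches 0x0a54 (an Intel datacenter NVMe device ID); this drive reports 0xa80a, so the list stays empty and the function returns immediately. The check reduces to a sysfs read (BDF taken from the log):

  bdf=0000:65:00.0
  device=$(cat "/sys/bus/pci/devices/$bdf/device")      # 0xa80a on this node
  if [[ $device == 0x0a54 ]]; then
      echo "$bdf"                                       # would be collected into the bdfs array
  fi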
00:05:19.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:19.523 11:43:13 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:19.523 11:43:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.523 11:43:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.523 11:43:13 -- common/autotest_common.sh@10 -- # set +x 00:05:19.523 ************************************ 00:05:19.523 START TEST env_memory 00:05:19.523 ************************************ 00:05:19.523 11:43:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:19.523 00:05:19.523 00:05:19.523 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.523 http://cunit.sourceforge.net/ 00:05:19.523 00:05:19.523 00:05:19.523 Suite: memory 00:05:19.785 Test: alloc and free memory map ...[2024-06-10 11:43:13.303484] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:19.785 passed 00:05:19.785 Test: mem map translation ...[2024-06-10 11:43:13.329115] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:19.785 [2024-06-10 11:43:13.329151] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:19.785 [2024-06-10 11:43:13.329199] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:19.785 [2024-06-10 11:43:13.329207] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:19.785 passed 00:05:19.785 Test: mem map registration ...[2024-06-10 11:43:13.384346] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:19.785 [2024-06-10 11:43:13.384362] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:19.785 passed 00:05:19.785 Test: mem map adjacent registrations ...passed 00:05:19.785 00:05:19.785 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.785 suites 1 1 n/a 0 0 00:05:19.785 tests 4 4 4 0 0 00:05:19.785 asserts 152 152 152 0 n/a 00:05:19.785 00:05:19.785 Elapsed time = 0.194 seconds 00:05:19.785 00:05:19.785 real 0m0.207s 00:05:19.785 user 0m0.198s 00:05:19.785 sys 0m0.009s 00:05:19.785 11:43:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.785 11:43:13 -- common/autotest_common.sh@10 -- # set +x 00:05:19.785 ************************************ 00:05:19.785 END TEST env_memory 00:05:19.785 ************************************ 00:05:19.785 11:43:13 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:19.785 11:43:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.785 11:43:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.785 11:43:13 -- common/autotest_common.sh@10 -- # set +x 
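The *ERROR* lines from memory_ut further up are expected: they are negative tests feeding deliberately misaligned values into the mem-map translation and registration calls. From the values used (2097152 and 1234, 0x200000 and 0x4d2), the constraint being exercised appears to be 2 MiB alignment of both address and length; a quick check of those values:

  for v in 2097152 1234 0x200000 0x4d2; do
      printf '%-8s multiple of 2MiB: %s\n' "$v" $(( v % 2097152 == 0 ))
  done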
00:05:19.785 ************************************ 00:05:19.785 START TEST env_vtophys 00:05:19.785 ************************************ 00:05:19.785 11:43:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:19.785 EAL: lib.eal log level changed from notice to debug 00:05:19.785 EAL: Detected lcore 0 as core 0 on socket 0 00:05:19.785 EAL: Detected lcore 1 as core 1 on socket 0 00:05:19.785 EAL: Detected lcore 2 as core 2 on socket 0 00:05:19.785 EAL: Detected lcore 3 as core 3 on socket 0 00:05:19.785 EAL: Detected lcore 4 as core 4 on socket 0 00:05:19.785 EAL: Detected lcore 5 as core 5 on socket 0 00:05:19.785 EAL: Detected lcore 6 as core 6 on socket 0 00:05:19.785 EAL: Detected lcore 7 as core 7 on socket 0 00:05:19.785 EAL: Detected lcore 8 as core 8 on socket 0 00:05:19.785 EAL: Detected lcore 9 as core 9 on socket 0 00:05:19.785 EAL: Detected lcore 10 as core 10 on socket 0 00:05:19.785 EAL: Detected lcore 11 as core 11 on socket 0 00:05:19.785 EAL: Detected lcore 12 as core 12 on socket 0 00:05:19.785 EAL: Detected lcore 13 as core 13 on socket 0 00:05:19.785 EAL: Detected lcore 14 as core 14 on socket 0 00:05:19.785 EAL: Detected lcore 15 as core 15 on socket 0 00:05:19.785 EAL: Detected lcore 16 as core 16 on socket 0 00:05:19.785 EAL: Detected lcore 17 as core 17 on socket 0 00:05:19.786 EAL: Detected lcore 18 as core 18 on socket 0 00:05:19.786 EAL: Detected lcore 19 as core 19 on socket 0 00:05:19.786 EAL: Detected lcore 20 as core 20 on socket 0 00:05:19.786 EAL: Detected lcore 21 as core 21 on socket 0 00:05:19.786 EAL: Detected lcore 22 as core 22 on socket 0 00:05:19.786 EAL: Detected lcore 23 as core 23 on socket 0 00:05:19.786 EAL: Detected lcore 24 as core 24 on socket 0 00:05:19.786 EAL: Detected lcore 25 as core 25 on socket 0 00:05:19.786 EAL: Detected lcore 26 as core 26 on socket 0 00:05:19.786 EAL: Detected lcore 27 as core 27 on socket 0 00:05:19.786 EAL: Detected lcore 28 as core 28 on socket 0 00:05:19.786 EAL: Detected lcore 29 as core 29 on socket 0 00:05:19.786 EAL: Detected lcore 30 as core 30 on socket 0 00:05:19.786 EAL: Detected lcore 31 as core 31 on socket 0 00:05:19.786 EAL: Detected lcore 32 as core 32 on socket 0 00:05:19.786 EAL: Detected lcore 33 as core 33 on socket 0 00:05:19.786 EAL: Detected lcore 34 as core 34 on socket 0 00:05:19.786 EAL: Detected lcore 35 as core 35 on socket 0 00:05:19.786 EAL: Detected lcore 36 as core 0 on socket 1 00:05:19.786 EAL: Detected lcore 37 as core 1 on socket 1 00:05:19.786 EAL: Detected lcore 38 as core 2 on socket 1 00:05:19.786 EAL: Detected lcore 39 as core 3 on socket 1 00:05:19.786 EAL: Detected lcore 40 as core 4 on socket 1 00:05:19.786 EAL: Detected lcore 41 as core 5 on socket 1 00:05:19.786 EAL: Detected lcore 42 as core 6 on socket 1 00:05:19.786 EAL: Detected lcore 43 as core 7 on socket 1 00:05:19.786 EAL: Detected lcore 44 as core 8 on socket 1 00:05:19.786 EAL: Detected lcore 45 as core 9 on socket 1 00:05:19.786 EAL: Detected lcore 46 as core 10 on socket 1 00:05:19.786 EAL: Detected lcore 47 as core 11 on socket 1 00:05:19.786 EAL: Detected lcore 48 as core 12 on socket 1 00:05:19.786 EAL: Detected lcore 49 as core 13 on socket 1 00:05:19.786 EAL: Detected lcore 50 as core 14 on socket 1 00:05:19.786 EAL: Detected lcore 51 as core 15 on socket 1 00:05:19.786 EAL: Detected lcore 52 as core 16 on socket 1 00:05:19.786 EAL: Detected lcore 53 as core 17 on socket 1 00:05:19.786 EAL: Detected lcore 54 as core 18 on socket 1 
00:05:19.786 EAL: Detected lcore 55 as core 19 on socket 1 00:05:19.786 EAL: Detected lcore 56 as core 20 on socket 1 00:05:19.786 EAL: Detected lcore 57 as core 21 on socket 1 00:05:19.786 EAL: Detected lcore 58 as core 22 on socket 1 00:05:19.786 EAL: Detected lcore 59 as core 23 on socket 1 00:05:19.786 EAL: Detected lcore 60 as core 24 on socket 1 00:05:19.786 EAL: Detected lcore 61 as core 25 on socket 1 00:05:19.786 EAL: Detected lcore 62 as core 26 on socket 1 00:05:19.786 EAL: Detected lcore 63 as core 27 on socket 1 00:05:19.786 EAL: Detected lcore 64 as core 28 on socket 1 00:05:19.786 EAL: Detected lcore 65 as core 29 on socket 1 00:05:19.786 EAL: Detected lcore 66 as core 30 on socket 1 00:05:19.786 EAL: Detected lcore 67 as core 31 on socket 1 00:05:19.786 EAL: Detected lcore 68 as core 32 on socket 1 00:05:19.786 EAL: Detected lcore 69 as core 33 on socket 1 00:05:19.786 EAL: Detected lcore 70 as core 34 on socket 1 00:05:19.786 EAL: Detected lcore 71 as core 35 on socket 1 00:05:19.786 EAL: Detected lcore 72 as core 0 on socket 0 00:05:19.786 EAL: Detected lcore 73 as core 1 on socket 0 00:05:19.786 EAL: Detected lcore 74 as core 2 on socket 0 00:05:19.786 EAL: Detected lcore 75 as core 3 on socket 0 00:05:19.786 EAL: Detected lcore 76 as core 4 on socket 0 00:05:19.786 EAL: Detected lcore 77 as core 5 on socket 0 00:05:19.786 EAL: Detected lcore 78 as core 6 on socket 0 00:05:19.786 EAL: Detected lcore 79 as core 7 on socket 0 00:05:19.786 EAL: Detected lcore 80 as core 8 on socket 0 00:05:19.786 EAL: Detected lcore 81 as core 9 on socket 0 00:05:19.786 EAL: Detected lcore 82 as core 10 on socket 0 00:05:19.786 EAL: Detected lcore 83 as core 11 on socket 0 00:05:19.786 EAL: Detected lcore 84 as core 12 on socket 0 00:05:19.786 EAL: Detected lcore 85 as core 13 on socket 0 00:05:19.786 EAL: Detected lcore 86 as core 14 on socket 0 00:05:19.786 EAL: Detected lcore 87 as core 15 on socket 0 00:05:19.786 EAL: Detected lcore 88 as core 16 on socket 0 00:05:19.786 EAL: Detected lcore 89 as core 17 on socket 0 00:05:19.786 EAL: Detected lcore 90 as core 18 on socket 0 00:05:19.786 EAL: Detected lcore 91 as core 19 on socket 0 00:05:19.786 EAL: Detected lcore 92 as core 20 on socket 0 00:05:19.786 EAL: Detected lcore 93 as core 21 on socket 0 00:05:19.786 EAL: Detected lcore 94 as core 22 on socket 0 00:05:19.786 EAL: Detected lcore 95 as core 23 on socket 0 00:05:19.786 EAL: Detected lcore 96 as core 24 on socket 0 00:05:19.786 EAL: Detected lcore 97 as core 25 on socket 0 00:05:19.786 EAL: Detected lcore 98 as core 26 on socket 0 00:05:19.786 EAL: Detected lcore 99 as core 27 on socket 0 00:05:19.786 EAL: Detected lcore 100 as core 28 on socket 0 00:05:19.786 EAL: Detected lcore 101 as core 29 on socket 0 00:05:19.786 EAL: Detected lcore 102 as core 30 on socket 0 00:05:19.786 EAL: Detected lcore 103 as core 31 on socket 0 00:05:19.786 EAL: Detected lcore 104 as core 32 on socket 0 00:05:19.786 EAL: Detected lcore 105 as core 33 on socket 0 00:05:19.786 EAL: Detected lcore 106 as core 34 on socket 0 00:05:19.786 EAL: Detected lcore 107 as core 35 on socket 0 00:05:19.786 EAL: Detected lcore 108 as core 0 on socket 1 00:05:19.786 EAL: Detected lcore 109 as core 1 on socket 1 00:05:19.786 EAL: Detected lcore 110 as core 2 on socket 1 00:05:19.786 EAL: Detected lcore 111 as core 3 on socket 1 00:05:19.786 EAL: Detected lcore 112 as core 4 on socket 1 00:05:19.786 EAL: Detected lcore 113 as core 5 on socket 1 00:05:19.786 EAL: Detected lcore 114 as core 6 on socket 1 00:05:19.786 
EAL: Detected lcore 115 as core 7 on socket 1 00:05:19.786 EAL: Detected lcore 116 as core 8 on socket 1 00:05:19.786 EAL: Detected lcore 117 as core 9 on socket 1 00:05:19.786 EAL: Detected lcore 118 as core 10 on socket 1 00:05:19.786 EAL: Detected lcore 119 as core 11 on socket 1 00:05:19.786 EAL: Detected lcore 120 as core 12 on socket 1 00:05:19.786 EAL: Detected lcore 121 as core 13 on socket 1 00:05:19.786 EAL: Detected lcore 122 as core 14 on socket 1 00:05:19.786 EAL: Detected lcore 123 as core 15 on socket 1 00:05:19.786 EAL: Detected lcore 124 as core 16 on socket 1 00:05:19.786 EAL: Detected lcore 125 as core 17 on socket 1 00:05:19.786 EAL: Detected lcore 126 as core 18 on socket 1 00:05:19.786 EAL: Detected lcore 127 as core 19 on socket 1 00:05:19.786 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:19.786 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:19.786 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:19.786 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:19.786 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:19.786 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:19.786 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:19.786 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:19.786 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:19.786 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:19.786 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:19.786 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:19.786 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:19.786 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:19.786 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:19.786 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:19.786 EAL: Maximum logical cores by configuration: 128 00:05:19.786 EAL: Detected CPU lcores: 128 00:05:19.786 EAL: Detected NUMA nodes: 2 00:05:19.786 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:19.786 EAL: Detected shared linkage of DPDK 00:05:19.786 EAL: No shared files mode enabled, IPC will be disabled 00:05:19.786 EAL: Bus pci wants IOVA as 'DC' 00:05:19.786 EAL: Buses did not request a specific IOVA mode. 00:05:19.786 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:19.786 EAL: Selected IOVA mode 'VA' 00:05:19.786 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.786 EAL: Probing VFIO support... 00:05:19.786 EAL: IOMMU type 1 (Type 1) is supported 00:05:19.786 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:19.786 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:19.786 EAL: VFIO support initialized 00:05:19.786 EAL: Ask a virtual area of 0x2e000 bytes 00:05:19.786 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:19.786 EAL: Setting up physically contiguous memory... 
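The "VFIO support initialized" message above depends on two host-side preconditions: the kernel exposing an IOMMU and the vfio-pci driver being available (the devices were rebound to it earlier in the log). Two standard checks, shown only as an illustration:

  ls /sys/kernel/iommu_groups | head -n 3    # non-empty when the IOMMU is enabled
  modprobe -n -v vfio-pci                    # dry run: reports how the module would be loaded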
00:05:19.786 EAL: Setting maximum number of open files to 524288 00:05:19.786 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:19.786 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:19.786 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:19.786 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.786 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:19.786 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.786 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.786 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:19.786 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:19.786 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.786 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:19.786 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.786 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.786 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:19.786 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:19.786 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.786 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:19.786 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.786 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.786 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:19.786 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:19.786 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.786 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:19.787 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.787 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.787 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:19.787 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:19.787 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:19.787 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.787 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:19.787 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.787 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.787 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:19.787 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:19.787 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.787 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:19.787 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.787 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.787 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:19.787 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:19.787 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.787 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:19.787 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.787 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.787 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:19.787 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:19.787 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.787 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:19.787 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.787 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.787 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:19.787 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:19.787 EAL: Hugepages will be freed exactly as allocated. 00:05:19.787 EAL: No shared files mode enabled, IPC is disabled 00:05:19.787 EAL: No shared files mode enabled, IPC is disabled 00:05:19.787 EAL: TSC frequency is ~2400000 KHz 00:05:19.787 EAL: Main lcore 0 is ready (tid=7ff5e2b5fa00;cpuset=[0]) 00:05:19.787 EAL: Trying to obtain current memory policy. 00:05:19.787 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.047 EAL: Restoring previous memory policy: 0 00:05:20.047 EAL: request: mp_malloc_sync 00:05:20.047 EAL: No shared files mode enabled, IPC is disabled 00:05:20.047 EAL: Heap on socket 0 was expanded by 2MB 00:05:20.047 EAL: No shared files mode enabled, IPC is disabled 00:05:20.047 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:20.047 EAL: Mem event callback 'spdk:(nil)' registered 00:05:20.047 00:05:20.047 00:05:20.047 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.047 http://cunit.sourceforge.net/ 00:05:20.047 00:05:20.047 00:05:20.047 Suite: components_suite 00:05:20.047 Test: vtophys_malloc_test ...passed 00:05:20.047 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:20.047 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.047 EAL: Restoring previous memory policy: 4 00:05:20.047 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.047 EAL: request: mp_malloc_sync 00:05:20.047 EAL: No shared files mode enabled, IPC is disabled 00:05:20.047 EAL: Heap on socket 0 was expanded by 4MB 00:05:20.047 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.047 EAL: request: mp_malloc_sync 00:05:20.047 EAL: No shared files mode enabled, IPC is disabled 00:05:20.047 EAL: Heap on socket 0 was shrunk by 4MB 00:05:20.047 EAL: Trying to obtain current memory policy. 00:05:20.047 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.047 EAL: Restoring previous memory policy: 4 00:05:20.047 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.047 EAL: request: mp_malloc_sync 00:05:20.047 EAL: No shared files mode enabled, IPC is disabled 00:05:20.047 EAL: Heap on socket 0 was expanded by 6MB 00:05:20.047 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.047 EAL: request: mp_malloc_sync 00:05:20.047 EAL: No shared files mode enabled, IPC is disabled 00:05:20.047 EAL: Heap on socket 0 was shrunk by 6MB 00:05:20.047 EAL: Trying to obtain current memory policy. 00:05:20.047 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.047 EAL: Restoring previous memory policy: 4 00:05:20.047 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.047 EAL: request: mp_malloc_sync 00:05:20.047 EAL: No shared files mode enabled, IPC is disabled 00:05:20.047 EAL: Heap on socket 0 was expanded by 10MB 00:05:20.047 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.047 EAL: request: mp_malloc_sync 00:05:20.047 EAL: No shared files mode enabled, IPC is disabled 00:05:20.047 EAL: Heap on socket 0 was shrunk by 10MB 00:05:20.047 EAL: Trying to obtain current memory policy. 
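The memseg-list reservations above are consistent with the parameters EAL printed: each list holds n_segs=8192 segments of the 2 MiB hugepage size, i.e. 16 GiB of virtual address space, which is exactly the 0x400000000-byte VA reservations shown:

  echo $(( 8192 * 2 * 1024 * 1024 ))              # 17179869184 bytes
  printf '0x%x\n' $(( 8192 * 2 * 1024 * 1024 ))   # 0x400000000, matching the reserved size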
00:05:20.047 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.047 EAL: Restoring previous memory policy: 4 00:05:20.047 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.047 EAL: request: mp_malloc_sync 00:05:20.047 EAL: No shared files mode enabled, IPC is disabled 00:05:20.047 EAL: Heap on socket 0 was expanded by 18MB 00:05:20.047 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.047 EAL: request: mp_malloc_sync 00:05:20.047 EAL: No shared files mode enabled, IPC is disabled 00:05:20.047 EAL: Heap on socket 0 was shrunk by 18MB 00:05:20.047 EAL: Trying to obtain current memory policy. 00:05:20.047 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.047 EAL: Restoring previous memory policy: 4 00:05:20.047 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.047 EAL: request: mp_malloc_sync 00:05:20.047 EAL: No shared files mode enabled, IPC is disabled 00:05:20.047 EAL: Heap on socket 0 was expanded by 34MB 00:05:20.047 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.047 EAL: request: mp_malloc_sync 00:05:20.048 EAL: No shared files mode enabled, IPC is disabled 00:05:20.048 EAL: Heap on socket 0 was shrunk by 34MB 00:05:20.048 EAL: Trying to obtain current memory policy. 00:05:20.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.048 EAL: Restoring previous memory policy: 4 00:05:20.048 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.048 EAL: request: mp_malloc_sync 00:05:20.048 EAL: No shared files mode enabled, IPC is disabled 00:05:20.048 EAL: Heap on socket 0 was expanded by 66MB 00:05:20.048 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.048 EAL: request: mp_malloc_sync 00:05:20.048 EAL: No shared files mode enabled, IPC is disabled 00:05:20.048 EAL: Heap on socket 0 was shrunk by 66MB 00:05:20.048 EAL: Trying to obtain current memory policy. 00:05:20.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.048 EAL: Restoring previous memory policy: 4 00:05:20.048 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.048 EAL: request: mp_malloc_sync 00:05:20.048 EAL: No shared files mode enabled, IPC is disabled 00:05:20.048 EAL: Heap on socket 0 was expanded by 130MB 00:05:20.048 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.048 EAL: request: mp_malloc_sync 00:05:20.048 EAL: No shared files mode enabled, IPC is disabled 00:05:20.048 EAL: Heap on socket 0 was shrunk by 130MB 00:05:20.048 EAL: Trying to obtain current memory policy. 00:05:20.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.048 EAL: Restoring previous memory policy: 4 00:05:20.048 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.048 EAL: request: mp_malloc_sync 00:05:20.048 EAL: No shared files mode enabled, IPC is disabled 00:05:20.048 EAL: Heap on socket 0 was expanded by 258MB 00:05:20.048 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.048 EAL: request: mp_malloc_sync 00:05:20.048 EAL: No shared files mode enabled, IPC is disabled 00:05:20.048 EAL: Heap on socket 0 was shrunk by 258MB 00:05:20.048 EAL: Trying to obtain current memory policy. 
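The heap-growth messages in vtophys_spdk_malloc_test follow a regular progression (4, 6, 10, 18, 34, 66, 130, 258 MB so far, continuing below). One reading of the numbers is that each allocation doubles from 2 MB up to 1024 MB and the reported expansion is that size plus one extra 2 MB hugepage:

  for k in $(seq 1 10); do
      printf '%d MB\n' $(( (1 << k) + 2 ))    # 4 6 10 18 34 66 130 258 514 1026
  done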
00:05:20.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.308 EAL: Restoring previous memory policy: 4 00:05:20.308 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.308 EAL: request: mp_malloc_sync 00:05:20.308 EAL: No shared files mode enabled, IPC is disabled 00:05:20.308 EAL: Heap on socket 0 was expanded by 514MB 00:05:20.308 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.308 EAL: request: mp_malloc_sync 00:05:20.308 EAL: No shared files mode enabled, IPC is disabled 00:05:20.308 EAL: Heap on socket 0 was shrunk by 514MB 00:05:20.308 EAL: Trying to obtain current memory policy. 00:05:20.308 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.308 EAL: Restoring previous memory policy: 4 00:05:20.308 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.308 EAL: request: mp_malloc_sync 00:05:20.308 EAL: No shared files mode enabled, IPC is disabled 00:05:20.308 EAL: Heap on socket 0 was expanded by 1026MB 00:05:20.570 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.570 EAL: request: mp_malloc_sync 00:05:20.570 EAL: No shared files mode enabled, IPC is disabled 00:05:20.570 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:20.570 passed 00:05:20.570 00:05:20.570 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.570 suites 1 1 n/a 0 0 00:05:20.570 tests 2 2 2 0 0 00:05:20.570 asserts 497 497 497 0 n/a 00:05:20.570 00:05:20.570 Elapsed time = 0.645 seconds 00:05:20.570 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.570 EAL: request: mp_malloc_sync 00:05:20.570 EAL: No shared files mode enabled, IPC is disabled 00:05:20.570 EAL: Heap on socket 0 was shrunk by 2MB 00:05:20.570 EAL: No shared files mode enabled, IPC is disabled 00:05:20.570 EAL: No shared files mode enabled, IPC is disabled 00:05:20.570 EAL: No shared files mode enabled, IPC is disabled 00:05:20.570 00:05:20.570 real 0m0.774s 00:05:20.570 user 0m0.412s 00:05:20.570 sys 0m0.323s 00:05:20.570 11:43:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.570 11:43:14 -- common/autotest_common.sh@10 -- # set +x 00:05:20.570 ************************************ 00:05:20.570 END TEST env_vtophys 00:05:20.570 ************************************ 00:05:20.570 11:43:14 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:20.570 11:43:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:20.570 11:43:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:20.570 11:43:14 -- common/autotest_common.sh@10 -- # set +x 00:05:20.570 ************************************ 00:05:20.570 START TEST env_pci 00:05:20.570 ************************************ 00:05:20.570 11:43:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:20.570 00:05:20.570 00:05:20.570 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.570 http://cunit.sourceforge.net/ 00:05:20.570 00:05:20.570 00:05:20.570 Suite: pci 00:05:20.570 Test: pci_hook ...[2024-06-10 11:43:14.337466] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1726644 has claimed it 00:05:20.830 EAL: Cannot find device (10000:00:01.0) 00:05:20.830 EAL: Failed to attach device on primary process 00:05:20.830 passed 00:05:20.830 00:05:20.830 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.830 suites 1 1 n/a 0 0 00:05:20.830 tests 1 1 1 0 0 
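The pci_hook failure above is intentional: the test exercises the device-claim path, and an attach is expected to fail when the per-device lock file (/var/tmp/spdk_pci_lock_10000:00:01.0, apparently already held by the test process 1726644) cannot be taken, which is why the case still reports "passed". The idea, sketched with flock; the library may use a different locking primitive internally:

  bdf="10000:00:01.0"
  exec 9> "/var/tmp/spdk_pci_lock_$bdf"
  if ! flock -n 9; then
      echo "device $bdf already claimed by another process"
  fi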
00:05:20.830 asserts 25 25 25 0 n/a 00:05:20.830 00:05:20.830 Elapsed time = 0.033 seconds 00:05:20.830 00:05:20.830 real 0m0.054s 00:05:20.830 user 0m0.018s 00:05:20.830 sys 0m0.036s 00:05:20.830 11:43:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.830 11:43:14 -- common/autotest_common.sh@10 -- # set +x 00:05:20.830 ************************************ 00:05:20.830 END TEST env_pci 00:05:20.830 ************************************ 00:05:20.830 11:43:14 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:20.830 11:43:14 -- env/env.sh@15 -- # uname 00:05:20.830 11:43:14 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:20.830 11:43:14 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:20.830 11:43:14 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:20.830 11:43:14 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:20.830 11:43:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:20.830 11:43:14 -- common/autotest_common.sh@10 -- # set +x 00:05:20.830 ************************************ 00:05:20.830 START TEST env_dpdk_post_init 00:05:20.830 ************************************ 00:05:20.831 11:43:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:20.831 EAL: Detected CPU lcores: 128 00:05:20.831 EAL: Detected NUMA nodes: 2 00:05:20.831 EAL: Detected shared linkage of DPDK 00:05:20.831 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:20.831 EAL: Selected IOVA mode 'VA' 00:05:20.831 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.831 EAL: VFIO support initialized 00:05:20.831 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:20.831 EAL: Using IOMMU type 1 (Type 1) 00:05:21.091 EAL: Ignore mapping IO port bar(1) 00:05:21.091 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:21.351 EAL: Ignore mapping IO port bar(1) 00:05:21.351 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:21.351 EAL: Ignore mapping IO port bar(1) 00:05:21.611 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:21.611 EAL: Ignore mapping IO port bar(1) 00:05:21.871 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:21.871 EAL: Ignore mapping IO port bar(1) 00:05:22.131 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:22.131 EAL: Ignore mapping IO port bar(1) 00:05:22.131 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:22.391 EAL: Ignore mapping IO port bar(1) 00:05:22.391 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:22.678 EAL: Ignore mapping IO port bar(1) 00:05:22.678 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:22.938 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:22.938 EAL: Ignore mapping IO port bar(1) 00:05:23.199 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:23.199 EAL: Ignore mapping IO port bar(1) 00:05:23.459 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:23.459 EAL: Ignore mapping IO port bar(1) 00:05:23.459 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 
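The probe lines above and below walk the I/OAT DMA channels (8086:0b00, eight per socket) plus the NVMe drive, all of which were bound to vfio-pci earlier in the log. Which driver currently owns a BDF can be read straight from sysfs, for example:

  for dev in /sys/bus/pci/devices/0000:80:01.*; do
      driver=$(readlink -f "$dev/driver" 2>/dev/null)   # empty if nothing is bound
      echo "$(basename "$dev") -> ${driver:+$(basename "$driver")}"
  done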
00:05:23.719 EAL: Ignore mapping IO port bar(1) 00:05:23.719 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:23.979 EAL: Ignore mapping IO port bar(1) 00:05:23.979 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:24.239 EAL: Ignore mapping IO port bar(1) 00:05:24.239 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:24.500 EAL: Ignore mapping IO port bar(1) 00:05:24.500 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:24.500 EAL: Ignore mapping IO port bar(1) 00:05:24.760 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:24.760 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:24.760 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:24.760 Starting DPDK initialization... 00:05:24.760 Starting SPDK post initialization... 00:05:24.760 SPDK NVMe probe 00:05:24.760 Attaching to 0000:65:00.0 00:05:24.760 Attached to 0000:65:00.0 00:05:24.760 Cleaning up... 00:05:26.744 00:05:26.744 real 0m5.715s 00:05:26.744 user 0m0.186s 00:05:26.744 sys 0m0.071s 00:05:26.744 11:43:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.744 11:43:20 -- common/autotest_common.sh@10 -- # set +x 00:05:26.744 ************************************ 00:05:26.744 END TEST env_dpdk_post_init 00:05:26.744 ************************************ 00:05:26.744 11:43:20 -- env/env.sh@26 -- # uname 00:05:26.744 11:43:20 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:26.744 11:43:20 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:26.744 11:43:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:26.744 11:43:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:26.744 11:43:20 -- common/autotest_common.sh@10 -- # set +x 00:05:26.744 ************************************ 00:05:26.744 START TEST env_mem_callbacks 00:05:26.744 ************************************ 00:05:26.744 11:43:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:26.745 EAL: Detected CPU lcores: 128 00:05:26.745 EAL: Detected NUMA nodes: 2 00:05:26.745 EAL: Detected shared linkage of DPDK 00:05:26.745 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:26.745 EAL: Selected IOVA mode 'VA' 00:05:26.745 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.745 EAL: VFIO support initialized 00:05:26.745 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:26.745 00:05:26.745 00:05:26.745 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.745 http://cunit.sourceforge.net/ 00:05:26.745 00:05:26.745 00:05:26.745 Suite: memory 00:05:26.745 Test: test ... 
00:05:26.745 register 0x200000200000 2097152 00:05:26.745 malloc 3145728 00:05:26.745 register 0x200000400000 4194304 00:05:26.745 buf 0x200000500000 len 3145728 PASSED 00:05:26.745 malloc 64 00:05:26.745 buf 0x2000004fff40 len 64 PASSED 00:05:26.745 malloc 4194304 00:05:26.745 register 0x200000800000 6291456 00:05:26.745 buf 0x200000a00000 len 4194304 PASSED 00:05:26.745 free 0x200000500000 3145728 00:05:26.745 free 0x2000004fff40 64 00:05:26.745 unregister 0x200000400000 4194304 PASSED 00:05:26.745 free 0x200000a00000 4194304 00:05:26.745 unregister 0x200000800000 6291456 PASSED 00:05:26.745 malloc 8388608 00:05:26.745 register 0x200000400000 10485760 00:05:26.745 buf 0x200000600000 len 8388608 PASSED 00:05:26.745 free 0x200000600000 8388608 00:05:26.745 unregister 0x200000400000 10485760 PASSED 00:05:26.745 passed 00:05:26.745 00:05:26.745 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.745 suites 1 1 n/a 0 0 00:05:26.745 tests 1 1 1 0 0 00:05:26.745 asserts 15 15 15 0 n/a 00:05:26.745 00:05:26.745 Elapsed time = 0.004 seconds 00:05:26.745 00:05:26.745 real 0m0.057s 00:05:26.745 user 0m0.017s 00:05:26.745 sys 0m0.040s 00:05:26.745 11:43:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.745 11:43:20 -- common/autotest_common.sh@10 -- # set +x 00:05:26.745 ************************************ 00:05:26.745 END TEST env_mem_callbacks 00:05:26.745 ************************************ 00:05:26.745 00:05:26.745 real 0m7.125s 00:05:26.745 user 0m0.952s 00:05:26.745 sys 0m0.713s 00:05:26.745 11:43:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.745 11:43:20 -- common/autotest_common.sh@10 -- # set +x 00:05:26.745 ************************************ 00:05:26.745 END TEST env 00:05:26.745 ************************************ 00:05:26.745 11:43:20 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:26.745 11:43:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:26.745 11:43:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:26.745 11:43:20 -- common/autotest_common.sh@10 -- # set +x 00:05:26.745 ************************************ 00:05:26.745 START TEST rpc 00:05:26.745 ************************************ 00:05:26.745 11:43:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:26.745 * Looking for test storage... 00:05:26.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:26.745 11:43:20 -- rpc/rpc.sh@65 -- # spdk_pid=1727815 00:05:26.745 11:43:20 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.745 11:43:20 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:26.745 11:43:20 -- rpc/rpc.sh@67 -- # waitforlisten 1727815 00:05:26.745 11:43:20 -- common/autotest_common.sh@819 -- # '[' -z 1727815 ']' 00:05:26.745 11:43:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.745 11:43:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:26.745 11:43:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
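waitforlisten above blocks until the freshly started spdk_tgt answers on its RPC socket. A simplified sketch of that polling loop, run from the SPDK tree (the real helper lives in autotest_common.sh and also tracks the target PID; rpc_get_methods is an RPC every SPDK target serves):

  rpc_addr=/var/tmp/spdk.sock
  max_retries=100
  for ((i = 0; i < max_retries; i++)); do
      if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
          break                              # target is up and serving RPCs
      fi
      sleep 0.5
  done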
00:05:26.745 11:43:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:26.745 11:43:20 -- common/autotest_common.sh@10 -- # set +x 00:05:26.745 [2024-06-10 11:43:20.475350] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:26.745 [2024-06-10 11:43:20.475420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1727815 ] 00:05:26.745 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.005 [2024-06-10 11:43:20.541758] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.005 [2024-06-10 11:43:20.617259] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:27.005 [2024-06-10 11:43:20.617399] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:27.005 [2024-06-10 11:43:20.617409] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1727815' to capture a snapshot of events at runtime. 00:05:27.005 [2024-06-10 11:43:20.617417] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1727815 for offline analysis/debug. 00:05:27.005 [2024-06-10 11:43:20.617440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.576 11:43:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:27.576 11:43:21 -- common/autotest_common.sh@852 -- # return 0 00:05:27.576 11:43:21 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:27.576 11:43:21 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:27.576 11:43:21 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:27.576 11:43:21 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:27.576 11:43:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:27.576 11:43:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:27.576 11:43:21 -- common/autotest_common.sh@10 -- # set +x 00:05:27.576 ************************************ 00:05:27.576 START TEST rpc_integrity 00:05:27.576 ************************************ 00:05:27.576 11:43:21 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:27.576 11:43:21 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:27.576 11:43:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.576 11:43:21 -- common/autotest_common.sh@10 -- # set +x 00:05:27.576 11:43:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.576 11:43:21 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:27.576 11:43:21 -- rpc/rpc.sh@13 -- # jq length 00:05:27.576 11:43:21 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:27.576 11:43:21 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:27.576 11:43:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.576 11:43:21 -- common/autotest_common.sh@10 -- # set +x 00:05:27.576 11:43:21 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:05:27.576 11:43:21 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:27.576 11:43:21 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:27.576 11:43:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.576 11:43:21 -- common/autotest_common.sh@10 -- # set +x 00:05:27.576 11:43:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.838 11:43:21 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:27.838 { 00:05:27.838 "name": "Malloc0", 00:05:27.838 "aliases": [ 00:05:27.838 "5d97425b-4b58-40a6-9841-4d7c72b16a63" 00:05:27.838 ], 00:05:27.838 "product_name": "Malloc disk", 00:05:27.838 "block_size": 512, 00:05:27.838 "num_blocks": 16384, 00:05:27.838 "uuid": "5d97425b-4b58-40a6-9841-4d7c72b16a63", 00:05:27.838 "assigned_rate_limits": { 00:05:27.838 "rw_ios_per_sec": 0, 00:05:27.838 "rw_mbytes_per_sec": 0, 00:05:27.838 "r_mbytes_per_sec": 0, 00:05:27.838 "w_mbytes_per_sec": 0 00:05:27.838 }, 00:05:27.838 "claimed": false, 00:05:27.838 "zoned": false, 00:05:27.838 "supported_io_types": { 00:05:27.838 "read": true, 00:05:27.838 "write": true, 00:05:27.838 "unmap": true, 00:05:27.838 "write_zeroes": true, 00:05:27.838 "flush": true, 00:05:27.838 "reset": true, 00:05:27.838 "compare": false, 00:05:27.838 "compare_and_write": false, 00:05:27.838 "abort": true, 00:05:27.838 "nvme_admin": false, 00:05:27.838 "nvme_io": false 00:05:27.838 }, 00:05:27.838 "memory_domains": [ 00:05:27.838 { 00:05:27.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.838 "dma_device_type": 2 00:05:27.838 } 00:05:27.838 ], 00:05:27.838 "driver_specific": {} 00:05:27.838 } 00:05:27.838 ]' 00:05:27.838 11:43:21 -- rpc/rpc.sh@17 -- # jq length 00:05:27.838 11:43:21 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:27.838 11:43:21 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:27.838 11:43:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.838 11:43:21 -- common/autotest_common.sh@10 -- # set +x 00:05:27.838 [2024-06-10 11:43:21.398030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:27.838 [2024-06-10 11:43:21.398064] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:27.838 [2024-06-10 11:43:21.398076] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24a7d00 00:05:27.838 [2024-06-10 11:43:21.398083] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:27.838 [2024-06-10 11:43:21.399449] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:27.838 [2024-06-10 11:43:21.399469] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:27.838 Passthru0 00:05:27.838 11:43:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.838 11:43:21 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:27.838 11:43:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.838 11:43:21 -- common/autotest_common.sh@10 -- # set +x 00:05:27.838 11:43:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.838 11:43:21 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:27.838 { 00:05:27.838 "name": "Malloc0", 00:05:27.838 "aliases": [ 00:05:27.838 "5d97425b-4b58-40a6-9841-4d7c72b16a63" 00:05:27.838 ], 00:05:27.838 "product_name": "Malloc disk", 00:05:27.838 "block_size": 512, 00:05:27.838 "num_blocks": 16384, 00:05:27.838 "uuid": "5d97425b-4b58-40a6-9841-4d7c72b16a63", 00:05:27.838 "assigned_rate_limits": { 00:05:27.838 "rw_ios_per_sec": 0, 00:05:27.838 "rw_mbytes_per_sec": 0, 00:05:27.838 
"r_mbytes_per_sec": 0, 00:05:27.838 "w_mbytes_per_sec": 0 00:05:27.838 }, 00:05:27.838 "claimed": true, 00:05:27.838 "claim_type": "exclusive_write", 00:05:27.838 "zoned": false, 00:05:27.838 "supported_io_types": { 00:05:27.838 "read": true, 00:05:27.838 "write": true, 00:05:27.838 "unmap": true, 00:05:27.838 "write_zeroes": true, 00:05:27.838 "flush": true, 00:05:27.838 "reset": true, 00:05:27.838 "compare": false, 00:05:27.838 "compare_and_write": false, 00:05:27.838 "abort": true, 00:05:27.838 "nvme_admin": false, 00:05:27.838 "nvme_io": false 00:05:27.838 }, 00:05:27.838 "memory_domains": [ 00:05:27.838 { 00:05:27.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.838 "dma_device_type": 2 00:05:27.838 } 00:05:27.838 ], 00:05:27.838 "driver_specific": {} 00:05:27.838 }, 00:05:27.838 { 00:05:27.838 "name": "Passthru0", 00:05:27.838 "aliases": [ 00:05:27.838 "80f35567-d5b3-5d76-a4ea-3185f6fbefef" 00:05:27.838 ], 00:05:27.838 "product_name": "passthru", 00:05:27.838 "block_size": 512, 00:05:27.838 "num_blocks": 16384, 00:05:27.838 "uuid": "80f35567-d5b3-5d76-a4ea-3185f6fbefef", 00:05:27.838 "assigned_rate_limits": { 00:05:27.838 "rw_ios_per_sec": 0, 00:05:27.838 "rw_mbytes_per_sec": 0, 00:05:27.838 "r_mbytes_per_sec": 0, 00:05:27.838 "w_mbytes_per_sec": 0 00:05:27.838 }, 00:05:27.838 "claimed": false, 00:05:27.838 "zoned": false, 00:05:27.838 "supported_io_types": { 00:05:27.838 "read": true, 00:05:27.838 "write": true, 00:05:27.838 "unmap": true, 00:05:27.838 "write_zeroes": true, 00:05:27.838 "flush": true, 00:05:27.838 "reset": true, 00:05:27.838 "compare": false, 00:05:27.838 "compare_and_write": false, 00:05:27.838 "abort": true, 00:05:27.838 "nvme_admin": false, 00:05:27.838 "nvme_io": false 00:05:27.838 }, 00:05:27.838 "memory_domains": [ 00:05:27.838 { 00:05:27.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.838 "dma_device_type": 2 00:05:27.838 } 00:05:27.838 ], 00:05:27.838 "driver_specific": { 00:05:27.838 "passthru": { 00:05:27.838 "name": "Passthru0", 00:05:27.838 "base_bdev_name": "Malloc0" 00:05:27.838 } 00:05:27.838 } 00:05:27.838 } 00:05:27.838 ]' 00:05:27.838 11:43:21 -- rpc/rpc.sh@21 -- # jq length 00:05:27.838 11:43:21 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:27.838 11:43:21 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:27.838 11:43:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.838 11:43:21 -- common/autotest_common.sh@10 -- # set +x 00:05:27.838 11:43:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.838 11:43:21 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:27.838 11:43:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.838 11:43:21 -- common/autotest_common.sh@10 -- # set +x 00:05:27.838 11:43:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.838 11:43:21 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:27.838 11:43:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.838 11:43:21 -- common/autotest_common.sh@10 -- # set +x 00:05:27.838 11:43:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.838 11:43:21 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:27.838 11:43:21 -- rpc/rpc.sh@26 -- # jq length 00:05:27.838 11:43:21 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:27.838 00:05:27.838 real 0m0.288s 00:05:27.838 user 0m0.178s 00:05:27.838 sys 0m0.039s 00:05:27.838 11:43:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.838 11:43:21 -- common/autotest_common.sh@10 -- # set +x 00:05:27.838 ************************************ 
00:05:27.838 END TEST rpc_integrity 00:05:27.838 ************************************ 00:05:27.838 11:43:21 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:27.838 11:43:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:27.838 11:43:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:27.838 11:43:21 -- common/autotest_common.sh@10 -- # set +x 00:05:27.838 ************************************ 00:05:27.838 START TEST rpc_plugins 00:05:27.838 ************************************ 00:05:27.838 11:43:21 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:27.838 11:43:21 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:27.838 11:43:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.838 11:43:21 -- common/autotest_common.sh@10 -- # set +x 00:05:27.838 11:43:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.838 11:43:21 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:27.838 11:43:21 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:27.838 11:43:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.838 11:43:21 -- common/autotest_common.sh@10 -- # set +x 00:05:28.099 11:43:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:28.099 11:43:21 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:28.099 { 00:05:28.099 "name": "Malloc1", 00:05:28.099 "aliases": [ 00:05:28.099 "8eec2bae-fbae-4455-8638-ab7fddeb6d2a" 00:05:28.099 ], 00:05:28.099 "product_name": "Malloc disk", 00:05:28.099 "block_size": 4096, 00:05:28.099 "num_blocks": 256, 00:05:28.099 "uuid": "8eec2bae-fbae-4455-8638-ab7fddeb6d2a", 00:05:28.099 "assigned_rate_limits": { 00:05:28.099 "rw_ios_per_sec": 0, 00:05:28.099 "rw_mbytes_per_sec": 0, 00:05:28.099 "r_mbytes_per_sec": 0, 00:05:28.099 "w_mbytes_per_sec": 0 00:05:28.099 }, 00:05:28.099 "claimed": false, 00:05:28.099 "zoned": false, 00:05:28.099 "supported_io_types": { 00:05:28.099 "read": true, 00:05:28.099 "write": true, 00:05:28.099 "unmap": true, 00:05:28.099 "write_zeroes": true, 00:05:28.099 "flush": true, 00:05:28.099 "reset": true, 00:05:28.099 "compare": false, 00:05:28.099 "compare_and_write": false, 00:05:28.099 "abort": true, 00:05:28.099 "nvme_admin": false, 00:05:28.099 "nvme_io": false 00:05:28.099 }, 00:05:28.099 "memory_domains": [ 00:05:28.099 { 00:05:28.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:28.099 "dma_device_type": 2 00:05:28.099 } 00:05:28.099 ], 00:05:28.099 "driver_specific": {} 00:05:28.099 } 00:05:28.099 ]' 00:05:28.099 11:43:21 -- rpc/rpc.sh@32 -- # jq length 00:05:28.099 11:43:21 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:28.099 11:43:21 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:28.099 11:43:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:28.099 11:43:21 -- common/autotest_common.sh@10 -- # set +x 00:05:28.099 11:43:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:28.099 11:43:21 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:28.099 11:43:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:28.099 11:43:21 -- common/autotest_common.sh@10 -- # set +x 00:05:28.099 11:43:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:28.099 11:43:21 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:28.099 11:43:21 -- rpc/rpc.sh@36 -- # jq length 00:05:28.099 11:43:21 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:28.099 00:05:28.099 real 0m0.134s 00:05:28.099 user 0m0.083s 00:05:28.099 sys 0m0.017s 00:05:28.099 11:43:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.099 11:43:21 -- 
common/autotest_common.sh@10 -- # set +x 00:05:28.099 ************************************ 00:05:28.099 END TEST rpc_plugins 00:05:28.099 ************************************ 00:05:28.099 11:43:21 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:28.099 11:43:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:28.099 11:43:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:28.099 11:43:21 -- common/autotest_common.sh@10 -- # set +x 00:05:28.099 ************************************ 00:05:28.099 START TEST rpc_trace_cmd_test 00:05:28.099 ************************************ 00:05:28.099 11:43:21 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:28.099 11:43:21 -- rpc/rpc.sh@40 -- # local info 00:05:28.099 11:43:21 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:28.099 11:43:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:28.099 11:43:21 -- common/autotest_common.sh@10 -- # set +x 00:05:28.099 11:43:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:28.099 11:43:21 -- rpc/rpc.sh@42 -- # info='{ 00:05:28.099 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1727815", 00:05:28.099 "tpoint_group_mask": "0x8", 00:05:28.099 "iscsi_conn": { 00:05:28.099 "mask": "0x2", 00:05:28.099 "tpoint_mask": "0x0" 00:05:28.099 }, 00:05:28.099 "scsi": { 00:05:28.099 "mask": "0x4", 00:05:28.099 "tpoint_mask": "0x0" 00:05:28.099 }, 00:05:28.099 "bdev": { 00:05:28.099 "mask": "0x8", 00:05:28.099 "tpoint_mask": "0xffffffffffffffff" 00:05:28.099 }, 00:05:28.099 "nvmf_rdma": { 00:05:28.099 "mask": "0x10", 00:05:28.099 "tpoint_mask": "0x0" 00:05:28.099 }, 00:05:28.099 "nvmf_tcp": { 00:05:28.099 "mask": "0x20", 00:05:28.099 "tpoint_mask": "0x0" 00:05:28.099 }, 00:05:28.099 "ftl": { 00:05:28.099 "mask": "0x40", 00:05:28.099 "tpoint_mask": "0x0" 00:05:28.099 }, 00:05:28.099 "blobfs": { 00:05:28.099 "mask": "0x80", 00:05:28.099 "tpoint_mask": "0x0" 00:05:28.099 }, 00:05:28.099 "dsa": { 00:05:28.099 "mask": "0x200", 00:05:28.099 "tpoint_mask": "0x0" 00:05:28.099 }, 00:05:28.099 "thread": { 00:05:28.099 "mask": "0x400", 00:05:28.099 "tpoint_mask": "0x0" 00:05:28.099 }, 00:05:28.099 "nvme_pcie": { 00:05:28.099 "mask": "0x800", 00:05:28.099 "tpoint_mask": "0x0" 00:05:28.099 }, 00:05:28.099 "iaa": { 00:05:28.099 "mask": "0x1000", 00:05:28.099 "tpoint_mask": "0x0" 00:05:28.099 }, 00:05:28.099 "nvme_tcp": { 00:05:28.099 "mask": "0x2000", 00:05:28.099 "tpoint_mask": "0x0" 00:05:28.099 }, 00:05:28.099 "bdev_nvme": { 00:05:28.099 "mask": "0x4000", 00:05:28.099 "tpoint_mask": "0x0" 00:05:28.099 } 00:05:28.099 }' 00:05:28.099 11:43:21 -- rpc/rpc.sh@43 -- # jq length 00:05:28.099 11:43:21 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:28.099 11:43:21 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:28.099 11:43:21 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:28.099 11:43:21 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:28.361 11:43:21 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:28.361 11:43:21 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:28.361 11:43:21 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:28.361 11:43:21 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:28.361 11:43:21 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:28.361 00:05:28.361 real 0m0.226s 00:05:28.361 user 0m0.194s 00:05:28.361 sys 0m0.024s 00:05:28.361 11:43:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.361 11:43:21 -- common/autotest_common.sh@10 -- # set +x 00:05:28.361 ************************************ 
00:05:28.361 END TEST rpc_trace_cmd_test 00:05:28.361 ************************************ 00:05:28.361 11:43:22 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:28.361 11:43:22 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:28.361 11:43:22 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:28.361 11:43:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:28.361 11:43:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:28.361 11:43:22 -- common/autotest_common.sh@10 -- # set +x 00:05:28.361 ************************************ 00:05:28.361 START TEST rpc_daemon_integrity 00:05:28.361 ************************************ 00:05:28.361 11:43:22 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:28.361 11:43:22 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:28.361 11:43:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:28.361 11:43:22 -- common/autotest_common.sh@10 -- # set +x 00:05:28.361 11:43:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:28.361 11:43:22 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:28.361 11:43:22 -- rpc/rpc.sh@13 -- # jq length 00:05:28.361 11:43:22 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:28.361 11:43:22 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:28.361 11:43:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:28.361 11:43:22 -- common/autotest_common.sh@10 -- # set +x 00:05:28.361 11:43:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:28.361 11:43:22 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:28.361 11:43:22 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:28.361 11:43:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:28.361 11:43:22 -- common/autotest_common.sh@10 -- # set +x 00:05:28.361 11:43:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:28.361 11:43:22 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:28.361 { 00:05:28.361 "name": "Malloc2", 00:05:28.361 "aliases": [ 00:05:28.361 "4d29dfa3-3b09-41c4-96ab-c25716af6370" 00:05:28.361 ], 00:05:28.361 "product_name": "Malloc disk", 00:05:28.361 "block_size": 512, 00:05:28.361 "num_blocks": 16384, 00:05:28.361 "uuid": "4d29dfa3-3b09-41c4-96ab-c25716af6370", 00:05:28.361 "assigned_rate_limits": { 00:05:28.361 "rw_ios_per_sec": 0, 00:05:28.361 "rw_mbytes_per_sec": 0, 00:05:28.361 "r_mbytes_per_sec": 0, 00:05:28.361 "w_mbytes_per_sec": 0 00:05:28.361 }, 00:05:28.361 "claimed": false, 00:05:28.361 "zoned": false, 00:05:28.361 "supported_io_types": { 00:05:28.361 "read": true, 00:05:28.361 "write": true, 00:05:28.361 "unmap": true, 00:05:28.361 "write_zeroes": true, 00:05:28.361 "flush": true, 00:05:28.361 "reset": true, 00:05:28.361 "compare": false, 00:05:28.361 "compare_and_write": false, 00:05:28.361 "abort": true, 00:05:28.361 "nvme_admin": false, 00:05:28.361 "nvme_io": false 00:05:28.361 }, 00:05:28.361 "memory_domains": [ 00:05:28.361 { 00:05:28.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:28.361 "dma_device_type": 2 00:05:28.361 } 00:05:28.361 ], 00:05:28.361 "driver_specific": {} 00:05:28.361 } 00:05:28.361 ]' 00:05:28.361 11:43:22 -- rpc/rpc.sh@17 -- # jq length 00:05:28.620 11:43:22 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:28.621 11:43:22 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:28.621 11:43:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:28.621 11:43:22 -- common/autotest_common.sh@10 -- # set +x 00:05:28.621 [2024-06-10 11:43:22.168138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:28.621 [2024-06-10 
11:43:22.168171] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:28.621 [2024-06-10 11:43:22.168184] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26554e0 00:05:28.621 [2024-06-10 11:43:22.168191] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:28.621 [2024-06-10 11:43:22.169399] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:28.621 [2024-06-10 11:43:22.169418] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:28.621 Passthru0 00:05:28.621 11:43:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:28.621 11:43:22 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:28.621 11:43:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:28.621 11:43:22 -- common/autotest_common.sh@10 -- # set +x 00:05:28.621 11:43:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:28.621 11:43:22 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:28.621 { 00:05:28.621 "name": "Malloc2", 00:05:28.621 "aliases": [ 00:05:28.621 "4d29dfa3-3b09-41c4-96ab-c25716af6370" 00:05:28.621 ], 00:05:28.621 "product_name": "Malloc disk", 00:05:28.621 "block_size": 512, 00:05:28.621 "num_blocks": 16384, 00:05:28.621 "uuid": "4d29dfa3-3b09-41c4-96ab-c25716af6370", 00:05:28.621 "assigned_rate_limits": { 00:05:28.621 "rw_ios_per_sec": 0, 00:05:28.621 "rw_mbytes_per_sec": 0, 00:05:28.621 "r_mbytes_per_sec": 0, 00:05:28.621 "w_mbytes_per_sec": 0 00:05:28.621 }, 00:05:28.621 "claimed": true, 00:05:28.621 "claim_type": "exclusive_write", 00:05:28.621 "zoned": false, 00:05:28.621 "supported_io_types": { 00:05:28.621 "read": true, 00:05:28.621 "write": true, 00:05:28.621 "unmap": true, 00:05:28.621 "write_zeroes": true, 00:05:28.621 "flush": true, 00:05:28.621 "reset": true, 00:05:28.621 "compare": false, 00:05:28.621 "compare_and_write": false, 00:05:28.621 "abort": true, 00:05:28.621 "nvme_admin": false, 00:05:28.621 "nvme_io": false 00:05:28.621 }, 00:05:28.621 "memory_domains": [ 00:05:28.621 { 00:05:28.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:28.621 "dma_device_type": 2 00:05:28.621 } 00:05:28.621 ], 00:05:28.621 "driver_specific": {} 00:05:28.621 }, 00:05:28.621 { 00:05:28.621 "name": "Passthru0", 00:05:28.621 "aliases": [ 00:05:28.621 "6acacc70-dbf8-5b9b-bb8a-d379f4dc2193" 00:05:28.621 ], 00:05:28.621 "product_name": "passthru", 00:05:28.621 "block_size": 512, 00:05:28.621 "num_blocks": 16384, 00:05:28.621 "uuid": "6acacc70-dbf8-5b9b-bb8a-d379f4dc2193", 00:05:28.621 "assigned_rate_limits": { 00:05:28.621 "rw_ios_per_sec": 0, 00:05:28.621 "rw_mbytes_per_sec": 0, 00:05:28.621 "r_mbytes_per_sec": 0, 00:05:28.621 "w_mbytes_per_sec": 0 00:05:28.621 }, 00:05:28.621 "claimed": false, 00:05:28.621 "zoned": false, 00:05:28.621 "supported_io_types": { 00:05:28.621 "read": true, 00:05:28.621 "write": true, 00:05:28.621 "unmap": true, 00:05:28.621 "write_zeroes": true, 00:05:28.621 "flush": true, 00:05:28.621 "reset": true, 00:05:28.621 "compare": false, 00:05:28.621 "compare_and_write": false, 00:05:28.621 "abort": true, 00:05:28.621 "nvme_admin": false, 00:05:28.621 "nvme_io": false 00:05:28.621 }, 00:05:28.621 "memory_domains": [ 00:05:28.621 { 00:05:28.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:28.621 "dma_device_type": 2 00:05:28.621 } 00:05:28.621 ], 00:05:28.621 "driver_specific": { 00:05:28.621 "passthru": { 00:05:28.621 "name": "Passthru0", 00:05:28.621 "base_bdev_name": "Malloc2" 00:05:28.621 } 00:05:28.621 } 00:05:28.621 } 
00:05:28.621 ]' 00:05:28.621 11:43:22 -- rpc/rpc.sh@21 -- # jq length 00:05:28.621 11:43:22 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:28.621 11:43:22 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:28.621 11:43:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:28.621 11:43:22 -- common/autotest_common.sh@10 -- # set +x 00:05:28.621 11:43:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:28.621 11:43:22 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:28.621 11:43:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:28.621 11:43:22 -- common/autotest_common.sh@10 -- # set +x 00:05:28.621 11:43:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:28.621 11:43:22 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:28.621 11:43:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:28.621 11:43:22 -- common/autotest_common.sh@10 -- # set +x 00:05:28.621 11:43:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:28.621 11:43:22 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:28.621 11:43:22 -- rpc/rpc.sh@26 -- # jq length 00:05:28.621 11:43:22 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:28.621 00:05:28.621 real 0m0.284s 00:05:28.621 user 0m0.183s 00:05:28.621 sys 0m0.039s 00:05:28.621 11:43:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.621 11:43:22 -- common/autotest_common.sh@10 -- # set +x 00:05:28.621 ************************************ 00:05:28.621 END TEST rpc_daemon_integrity 00:05:28.621 ************************************ 00:05:28.621 11:43:22 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:28.621 11:43:22 -- rpc/rpc.sh@84 -- # killprocess 1727815 00:05:28.621 11:43:22 -- common/autotest_common.sh@926 -- # '[' -z 1727815 ']' 00:05:28.621 11:43:22 -- common/autotest_common.sh@930 -- # kill -0 1727815 00:05:28.621 11:43:22 -- common/autotest_common.sh@931 -- # uname 00:05:28.621 11:43:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:28.621 11:43:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1727815 00:05:28.881 11:43:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:28.881 11:43:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:28.881 11:43:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1727815' 00:05:28.881 killing process with pid 1727815 00:05:28.881 11:43:22 -- common/autotest_common.sh@945 -- # kill 1727815 00:05:28.881 11:43:22 -- common/autotest_common.sh@950 -- # wait 1727815 00:05:28.881 00:05:28.881 real 0m2.290s 00:05:28.881 user 0m2.959s 00:05:28.881 sys 0m0.637s 00:05:28.881 11:43:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.881 11:43:22 -- common/autotest_common.sh@10 -- # set +x 00:05:28.881 ************************************ 00:05:28.881 END TEST rpc 00:05:28.881 ************************************ 00:05:28.881 11:43:22 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:29.140 11:43:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:29.140 11:43:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:29.140 11:43:22 -- common/autotest_common.sh@10 -- # set +x 00:05:29.140 ************************************ 00:05:29.140 START TEST rpc_client 00:05:29.140 ************************************ 00:05:29.140 11:43:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 
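The teardown traced just before END TEST rpc is the usual killprocess pattern: confirm the pid is still alive and belongs to the SPDK reactor, send it SIGTERM, then reap it so the EXIT trap does not fire a second time. A rough sketch, assuming $spdk_pid holds the target pid started earlier; the real helper in test/common/autotest_common.sh adds sudo and process-name checks:

  if kill -0 "$spdk_pid" 2>/dev/null; then         # still running?
      ps --no-headers -o comm= "$spdk_pid"          # should report reactor_0
      kill "$spdk_pid"                              # default SIGTERM
      wait "$spdk_pid" 2>/dev/null || true          # reap the child
  fi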
00:05:29.140 * Looking for test storage... 00:05:29.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:29.140 11:43:22 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:29.140 OK 00:05:29.140 11:43:22 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:29.140 00:05:29.140 real 0m0.119s 00:05:29.140 user 0m0.054s 00:05:29.140 sys 0m0.073s 00:05:29.140 11:43:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.140 11:43:22 -- common/autotest_common.sh@10 -- # set +x 00:05:29.140 ************************************ 00:05:29.140 END TEST rpc_client 00:05:29.140 ************************************ 00:05:29.140 11:43:22 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:29.140 11:43:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:29.140 11:43:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:29.140 11:43:22 -- common/autotest_common.sh@10 -- # set +x 00:05:29.140 ************************************ 00:05:29.140 START TEST json_config 00:05:29.140 ************************************ 00:05:29.140 11:43:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:29.140 11:43:22 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:29.140 11:43:22 -- nvmf/common.sh@7 -- # uname -s 00:05:29.140 11:43:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:29.140 11:43:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:29.140 11:43:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:29.140 11:43:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:29.140 11:43:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:29.140 11:43:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:29.140 11:43:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:29.140 11:43:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:29.140 11:43:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:29.140 11:43:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:29.140 11:43:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:29.140 11:43:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:29.140 11:43:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:29.140 11:43:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:29.140 11:43:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:29.140 11:43:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:29.140 11:43:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:29.140 11:43:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:29.401 11:43:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:29.401 11:43:22 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.401 11:43:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.401 11:43:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.401 11:43:22 -- paths/export.sh@5 -- # export PATH 00:05:29.401 11:43:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.401 11:43:22 -- nvmf/common.sh@46 -- # : 0 00:05:29.401 11:43:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:29.401 11:43:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:29.401 11:43:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:29.401 11:43:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:29.401 11:43:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:29.401 11:43:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:29.401 11:43:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:29.401 11:43:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:29.401 11:43:22 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:29.401 11:43:22 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:29.401 11:43:22 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:29.401 11:43:22 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:29.401 11:43:22 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:29.401 11:43:22 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:29.401 11:43:22 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:29.401 11:43:22 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:29.401 11:43:22 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:29.401 11:43:22 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:29.401 11:43:22 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:29.401 11:43:22 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:29.401 11:43:22 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:29.401 11:43:22 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:29.401 11:43:22 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:29.401 INFO: JSON configuration test init 00:05:29.401 11:43:22 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:29.401 11:43:22 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:29.401 11:43:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:29.401 11:43:22 -- common/autotest_common.sh@10 -- # set +x 00:05:29.401 11:43:22 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:29.401 11:43:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:29.401 11:43:22 -- common/autotest_common.sh@10 -- # set +x 00:05:29.401 11:43:22 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:29.401 11:43:22 -- json_config/json_config.sh@98 -- # local app=target 00:05:29.401 11:43:22 -- json_config/json_config.sh@99 -- # shift 00:05:29.401 11:43:22 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:29.401 11:43:22 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:29.401 11:43:22 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:29.401 11:43:22 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:29.401 11:43:22 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:29.401 11:43:22 -- json_config/json_config.sh@111 -- # app_pid[$app]=1728653 00:05:29.401 11:43:22 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:29.401 Waiting for target to run... 00:05:29.401 11:43:22 -- json_config/json_config.sh@114 -- # waitforlisten 1728653 /var/tmp/spdk_tgt.sock 00:05:29.401 11:43:22 -- common/autotest_common.sh@819 -- # '[' -z 1728653 ']' 00:05:29.401 11:43:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:29.401 11:43:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:29.401 11:43:22 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:29.401 11:43:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:29.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:29.401 11:43:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:29.401 11:43:22 -- common/autotest_common.sh@10 -- # set +x 00:05:29.401 [2024-06-10 11:43:22.986007] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
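For the json_config suite the target is started pinned to one core with 1024 MB of memory, a dedicated RPC socket, and --wait-for-rpc, so no subsystem initializes until a configuration is pushed over RPC. A sketch of that bring-up, assuming the SPDK repo root; the pipe into load_config mirrors how the suite feeds in the gen_nvme.sh output, though the exact redirection used by json_config.sh may differ:

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # (waitforlisten against /var/tmp/spdk_tgt.sock omitted here for brevity)
  # With --wait-for-rpc the target idles until a config arrives; the suite
  # generates one from the local NVMe devices and loads it:
  ./scripts/gen_nvme.sh --json-with-subsystems | \
      ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config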
00:05:29.401 [2024-06-10 11:43:22.986076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1728653 ] 00:05:29.401 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.662 [2024-06-10 11:43:23.327337] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.662 [2024-06-10 11:43:23.377037] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:29.662 [2024-06-10 11:43:23.377162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.235 11:43:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:30.235 11:43:23 -- common/autotest_common.sh@852 -- # return 0 00:05:30.235 11:43:23 -- json_config/json_config.sh@115 -- # echo '' 00:05:30.235 00:05:30.235 11:43:23 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:30.235 11:43:23 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:30.235 11:43:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:30.235 11:43:23 -- common/autotest_common.sh@10 -- # set +x 00:05:30.235 11:43:23 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:30.235 11:43:23 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:30.235 11:43:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:30.235 11:43:23 -- common/autotest_common.sh@10 -- # set +x 00:05:30.235 11:43:23 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:30.235 11:43:23 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:30.235 11:43:23 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:30.807 11:43:24 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:30.807 11:43:24 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:30.807 11:43:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:30.807 11:43:24 -- common/autotest_common.sh@10 -- # set +x 00:05:30.807 11:43:24 -- json_config/json_config.sh@48 -- # local ret=0 00:05:30.807 11:43:24 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:30.807 11:43:24 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:30.807 11:43:24 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:30.807 11:43:24 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:30.807 11:43:24 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:30.807 11:43:24 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:30.807 11:43:24 -- json_config/json_config.sh@51 -- # local get_types 00:05:30.807 11:43:24 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:30.807 11:43:24 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:30.807 11:43:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:30.807 11:43:24 -- common/autotest_common.sh@10 -- # set +x 00:05:30.807 11:43:24 -- json_config/json_config.sh@58 -- # return 0 00:05:30.807 11:43:24 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:30.807 11:43:24 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:30.807 11:43:24 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:30.807 11:43:24 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:30.807 11:43:24 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:30.807 11:43:24 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:30.807 11:43:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:30.807 11:43:24 -- common/autotest_common.sh@10 -- # set +x 00:05:30.807 11:43:24 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:30.807 11:43:24 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:30.807 11:43:24 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:30.807 11:43:24 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:30.807 11:43:24 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:31.068 MallocForNvmf0 00:05:31.068 11:43:24 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:31.068 11:43:24 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:31.329 MallocForNvmf1 00:05:31.329 11:43:24 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:31.329 11:43:24 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:31.329 [2024-06-10 11:43:24.995074] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:31.329 11:43:25 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:31.329 11:43:25 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:31.590 11:43:25 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:31.590 11:43:25 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:31.590 11:43:25 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:31.590 11:43:25 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:31.850 11:43:25 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:31.850 11:43:25 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:31.850 [2024-06-10 11:43:25.613140] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 
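Collected in one place, the create_nvmf_subsystem_config step that just ran issues the following RPCs against the json_config socket; the lines are taken from the trace above, with only the repeated rpc.py prefix factored out:

  rpc="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0       # 8 MiB, 512 B blocks
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1      # 4 MiB, 1 KiB blocks
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420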
00:05:32.111 11:43:25 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:32.111 11:43:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:32.111 11:43:25 -- common/autotest_common.sh@10 -- # set +x 00:05:32.111 11:43:25 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:32.111 11:43:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:32.111 11:43:25 -- common/autotest_common.sh@10 -- # set +x 00:05:32.111 11:43:25 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:32.111 11:43:25 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:32.111 11:43:25 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:32.111 MallocBdevForConfigChangeCheck 00:05:32.111 11:43:25 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:32.111 11:43:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:32.111 11:43:25 -- common/autotest_common.sh@10 -- # set +x 00:05:32.372 11:43:25 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:32.372 11:43:25 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:32.633 11:43:26 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:32.633 INFO: shutting down applications... 00:05:32.633 11:43:26 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:32.633 11:43:26 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:32.633 11:43:26 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:32.633 11:43:26 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:32.893 Calling clear_iscsi_subsystem 00:05:32.893 Calling clear_nvmf_subsystem 00:05:32.893 Calling clear_nbd_subsystem 00:05:32.893 Calling clear_ublk_subsystem 00:05:32.893 Calling clear_vhost_blk_subsystem 00:05:32.893 Calling clear_vhost_scsi_subsystem 00:05:32.893 Calling clear_scheduler_subsystem 00:05:32.893 Calling clear_bdev_subsystem 00:05:32.893 Calling clear_accel_subsystem 00:05:32.893 Calling clear_vmd_subsystem 00:05:32.893 Calling clear_sock_subsystem 00:05:32.893 Calling clear_iobuf_subsystem 00:05:32.893 11:43:26 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:32.893 11:43:26 -- json_config/json_config.sh@396 -- # count=100 00:05:32.893 11:43:26 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:32.893 11:43:26 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:32.893 11:43:26 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:32.893 11:43:26 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:33.153 11:43:26 -- json_config/json_config.sh@398 -- # break 00:05:33.153 11:43:26 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:33.153 11:43:26 -- json_config/json_config.sh@432 -- # 
json_config_test_shutdown_app target 00:05:33.153 11:43:26 -- json_config/json_config.sh@120 -- # local app=target 00:05:33.153 11:43:26 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:33.153 11:43:26 -- json_config/json_config.sh@124 -- # [[ -n 1728653 ]] 00:05:33.153 11:43:26 -- json_config/json_config.sh@127 -- # kill -SIGINT 1728653 00:05:33.153 11:43:26 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:33.153 11:43:26 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:33.153 11:43:26 -- json_config/json_config.sh@130 -- # kill -0 1728653 00:05:33.153 11:43:26 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:33.724 11:43:27 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:33.724 11:43:27 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:33.724 11:43:27 -- json_config/json_config.sh@130 -- # kill -0 1728653 00:05:33.724 11:43:27 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:33.724 11:43:27 -- json_config/json_config.sh@132 -- # break 00:05:33.724 11:43:27 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:33.724 11:43:27 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:33.724 SPDK target shutdown done 00:05:33.724 11:43:27 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:33.724 INFO: relaunching applications... 00:05:33.724 11:43:27 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.724 11:43:27 -- json_config/json_config.sh@98 -- # local app=target 00:05:33.724 11:43:27 -- json_config/json_config.sh@99 -- # shift 00:05:33.724 11:43:27 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:33.724 11:43:27 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:33.724 11:43:27 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:33.724 11:43:27 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:33.724 11:43:27 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:33.724 11:43:27 -- json_config/json_config.sh@111 -- # app_pid[$app]=1729681 00:05:33.724 11:43:27 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:33.724 Waiting for target to run... 00:05:33.724 11:43:27 -- json_config/json_config.sh@114 -- # waitforlisten 1729681 /var/tmp/spdk_tgt.sock 00:05:33.724 11:43:27 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.724 11:43:27 -- common/autotest_common.sh@819 -- # '[' -z 1729681 ']' 00:05:33.724 11:43:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:33.724 11:43:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:33.724 11:43:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:33.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:33.724 11:43:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:33.724 11:43:27 -- common/autotest_common.sh@10 -- # set +x 00:05:33.724 [2024-06-10 11:43:27.439310] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
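The relaunch that starts here follows the pattern traced above: SIGINT the running target, poll until its pid disappears (the suite allows up to 30 half-second tries), then start a fresh target directly from the JSON file that save_config produced. A sketch with an illustrative $old_pid and the config path shortened to the repo root:

  kill -SIGINT "$old_pid"
  for _ in $(seq 1 30); do
      kill -0 "$old_pid" 2>/dev/null || break     # target has exited
      sleep 0.5
  done
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json ./spdk_tgt_config.json &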
00:05:33.724 [2024-06-10 11:43:27.439373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1729681 ] 00:05:33.724 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.294 [2024-06-10 11:43:27.761785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.294 [2024-06-10 11:43:27.811543] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:34.294 [2024-06-10 11:43:27.811660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.553 [2024-06-10 11:43:28.296524] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:34.812 [2024-06-10 11:43:28.328927] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:35.384 11:43:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:35.384 11:43:28 -- common/autotest_common.sh@852 -- # return 0 00:05:35.384 11:43:28 -- json_config/json_config.sh@115 -- # echo '' 00:05:35.384 00:05:35.384 11:43:28 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:35.384 11:43:28 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:35.384 INFO: Checking if target configuration is the same... 00:05:35.384 11:43:28 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:35.384 11:43:28 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:35.384 11:43:28 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.384 + '[' 2 -ne 2 ']' 00:05:35.384 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:35.384 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:35.384 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:35.384 +++ basename /dev/fd/62 00:05:35.384 ++ mktemp /tmp/62.XXX 00:05:35.384 + tmp_file_1=/tmp/62.rbe 00:05:35.384 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:35.384 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:35.384 + tmp_file_2=/tmp/spdk_tgt_config.json.taV 00:05:35.384 + ret=0 00:05:35.384 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:35.384 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:35.644 + diff -u /tmp/62.rbe /tmp/spdk_tgt_config.json.taV 00:05:35.644 + echo 'INFO: JSON config files are the same' 00:05:35.644 INFO: JSON config files are the same 00:05:35.644 + rm /tmp/62.rbe /tmp/spdk_tgt_config.json.taV 00:05:35.644 + exit 0 00:05:35.644 11:43:29 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:35.644 11:43:29 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:35.644 INFO: changing configuration and checking if this can be detected... 
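The "JSON config files are the same" verdict above comes from normalizing both configurations and diffing them: the live config is dumped with save_config, both sides are passed through config_filter.py -method sort, and diff -u decides the result. A condensed sketch with fixed temp-file names in place of the mktemp output seen in the trace:

  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | ./test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
  ./test/json_config/config_filter.py -method sort \
      < ./spdk_tgt_config.json > /tmp/disk_sorted.json
  diff -u /tmp/live_sorted.json /tmp/disk_sorted.json && echo 'INFO: JSON config files are the same'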
00:05:35.644 11:43:29 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:35.644 11:43:29 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:35.644 11:43:29 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:35.644 11:43:29 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:35.644 11:43:29 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.644 + '[' 2 -ne 2 ']' 00:05:35.644 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:35.644 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:35.644 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:35.644 +++ basename /dev/fd/62 00:05:35.644 ++ mktemp /tmp/62.XXX 00:05:35.644 + tmp_file_1=/tmp/62.KuD 00:05:35.644 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:35.644 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:35.644 + tmp_file_2=/tmp/spdk_tgt_config.json.3cv 00:05:35.644 + ret=0 00:05:35.644 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:35.905 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:36.165 + diff -u /tmp/62.KuD /tmp/spdk_tgt_config.json.3cv 00:05:36.165 + ret=1 00:05:36.165 + echo '=== Start of file: /tmp/62.KuD ===' 00:05:36.165 + cat /tmp/62.KuD 00:05:36.165 + echo '=== End of file: /tmp/62.KuD ===' 00:05:36.165 + echo '' 00:05:36.165 + echo '=== Start of file: /tmp/spdk_tgt_config.json.3cv ===' 00:05:36.165 + cat /tmp/spdk_tgt_config.json.3cv 00:05:36.165 + echo '=== End of file: /tmp/spdk_tgt_config.json.3cv ===' 00:05:36.165 + echo '' 00:05:36.165 + rm /tmp/62.KuD /tmp/spdk_tgt_config.json.3cv 00:05:36.165 + exit 1 00:05:36.165 11:43:29 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:36.165 INFO: configuration change detected. 
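The change-detection pass works the other way around: MallocBdevForConfigChangeCheck, which was created before the config was saved, is deleted from the live target, so the next sorted diff must fail. A sketch of that expectation, reusing json_diff.sh the same way the trace does:

  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  if ./test/json_config/json_diff.sh \
         <(./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config) \
         ./spdk_tgt_config.json; then
      echo 'ERROR: configuration change was not detected' >&2
      exit 1
  fi
  echo 'INFO: configuration change detected.'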
00:05:36.165 11:43:29 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:36.165 11:43:29 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:36.165 11:43:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:36.165 11:43:29 -- common/autotest_common.sh@10 -- # set +x 00:05:36.165 11:43:29 -- json_config/json_config.sh@360 -- # local ret=0 00:05:36.165 11:43:29 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:36.165 11:43:29 -- json_config/json_config.sh@370 -- # [[ -n 1729681 ]] 00:05:36.165 11:43:29 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:36.165 11:43:29 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:36.165 11:43:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:36.165 11:43:29 -- common/autotest_common.sh@10 -- # set +x 00:05:36.165 11:43:29 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:36.165 11:43:29 -- json_config/json_config.sh@246 -- # uname -s 00:05:36.165 11:43:29 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:36.165 11:43:29 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:36.165 11:43:29 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:36.165 11:43:29 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:36.165 11:43:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:36.165 11:43:29 -- common/autotest_common.sh@10 -- # set +x 00:05:36.165 11:43:29 -- json_config/json_config.sh@376 -- # killprocess 1729681 00:05:36.165 11:43:29 -- common/autotest_common.sh@926 -- # '[' -z 1729681 ']' 00:05:36.165 11:43:29 -- common/autotest_common.sh@930 -- # kill -0 1729681 00:05:36.165 11:43:29 -- common/autotest_common.sh@931 -- # uname 00:05:36.165 11:43:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:36.166 11:43:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1729681 00:05:36.166 11:43:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:36.166 11:43:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:36.166 11:43:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1729681' 00:05:36.166 killing process with pid 1729681 00:05:36.166 11:43:29 -- common/autotest_common.sh@945 -- # kill 1729681 00:05:36.166 11:43:29 -- common/autotest_common.sh@950 -- # wait 1729681 00:05:36.426 11:43:30 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:36.426 11:43:30 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:36.426 11:43:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:36.426 11:43:30 -- common/autotest_common.sh@10 -- # set +x 00:05:36.426 11:43:30 -- json_config/json_config.sh@381 -- # return 0 00:05:36.426 11:43:30 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:36.426 INFO: Success 00:05:36.426 00:05:36.426 real 0m7.307s 00:05:36.426 user 0m8.727s 00:05:36.426 sys 0m1.847s 00:05:36.426 11:43:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.426 11:43:30 -- common/autotest_common.sh@10 -- # set +x 00:05:36.426 ************************************ 00:05:36.426 END TEST json_config 00:05:36.426 ************************************ 00:05:36.426 11:43:30 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:36.426 11:43:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:36.426 11:43:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:36.426 11:43:30 -- common/autotest_common.sh@10 -- # set +x 00:05:36.426 ************************************ 00:05:36.426 START TEST json_config_extra_key 00:05:36.426 ************************************ 00:05:36.426 11:43:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:36.689 11:43:30 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:36.689 11:43:30 -- nvmf/common.sh@7 -- # uname -s 00:05:36.689 11:43:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:36.689 11:43:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:36.689 11:43:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:36.689 11:43:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:36.689 11:43:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:36.689 11:43:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:36.689 11:43:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:36.689 11:43:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:36.689 11:43:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:36.689 11:43:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:36.689 11:43:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:36.689 11:43:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:36.689 11:43:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:36.689 11:43:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:36.689 11:43:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:36.689 11:43:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:36.689 11:43:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:36.689 11:43:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:36.689 11:43:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:36.689 11:43:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.689 11:43:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.689 11:43:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.689 11:43:30 -- paths/export.sh@5 -- # export PATH 00:05:36.689 11:43:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.689 11:43:30 -- nvmf/common.sh@46 -- # : 0 00:05:36.689 11:43:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:36.689 11:43:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:36.689 11:43:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:36.689 11:43:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:36.689 11:43:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:36.689 11:43:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:36.689 11:43:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:36.689 11:43:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:36.689 11:43:30 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:36.689 11:43:30 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:36.689 11:43:30 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:36.689 11:43:30 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:36.689 11:43:30 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:36.689 11:43:30 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:36.689 11:43:30 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:36.689 11:43:30 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:36.689 11:43:30 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:36.689 11:43:30 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:36.689 INFO: launching applications... 00:05:36.689 11:43:30 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:36.689 11:43:30 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:36.689 11:43:30 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:36.689 11:43:30 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:36.689 11:43:30 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:36.689 11:43:30 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=1730262 00:05:36.689 11:43:30 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:36.689 Waiting for target to run... 
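For the extra_key variant the harness tracks each application it may launch in a set of associative arrays keyed by app name; only 'target' is used in this run. A minimal sketch of that bookkeeping and the launch it feeds, with values copied from the log above:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]="$SPDK/test/json_config/extra_key.json")
app=target
# start the target with its JSON config and remember the PID for the shutdown phase
$SPDK/build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
app_pid[$app]=$!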
00:05:36.689 11:43:30 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 1730262 /var/tmp/spdk_tgt.sock 00:05:36.689 11:43:30 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:36.689 11:43:30 -- common/autotest_common.sh@819 -- # '[' -z 1730262 ']' 00:05:36.689 11:43:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:36.689 11:43:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:36.689 11:43:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:36.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:36.689 11:43:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:36.689 11:43:30 -- common/autotest_common.sh@10 -- # set +x 00:05:36.689 [2024-06-10 11:43:30.328315] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:36.689 [2024-06-10 11:43:30.328384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1730262 ] 00:05:36.689 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.950 [2024-06-10 11:43:30.579599] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.950 [2024-06-10 11:43:30.631607] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:36.950 [2024-06-10 11:43:30.631728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.521 11:43:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:37.521 11:43:31 -- common/autotest_common.sh@852 -- # return 0 00:05:37.521 11:43:31 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:37.521 00:05:37.521 11:43:31 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:37.521 INFO: shutting down applications... 
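waitforlisten blocks until the newly launched process answers on its UNIX domain socket. A rough stand-in for that helper from autotest_common.sh (the real one is more careful, e.g. about the PID disappearing) is a simple poll of the RPC socket:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/spdk_tgt.sock
for _ in $(seq 1 100); do
        # any cheap RPC works as a liveness probe; rpc_get_methods takes no arguments
        if $SPDK/scripts/rpc.py -s "$sock" -t 1 rpc_get_methods > /dev/null 2>&1; then
                echo "target is listening on $sock"
                break
        fi
        sleep 0.1
done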
00:05:37.521 11:43:31 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:37.521 11:43:31 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:37.521 11:43:31 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:37.521 11:43:31 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 1730262 ]] 00:05:37.521 11:43:31 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 1730262 00:05:37.521 11:43:31 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:37.521 11:43:31 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:37.521 11:43:31 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1730262 00:05:37.521 11:43:31 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:38.093 11:43:31 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:38.093 11:43:31 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:38.093 11:43:31 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1730262 00:05:38.093 11:43:31 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:38.093 11:43:31 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:38.093 11:43:31 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:38.093 11:43:31 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:38.093 SPDK target shutdown done 00:05:38.093 11:43:31 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:38.093 Success 00:05:38.093 00:05:38.093 real 0m1.407s 00:05:38.093 user 0m1.061s 00:05:38.093 sys 0m0.339s 00:05:38.093 11:43:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.093 11:43:31 -- common/autotest_common.sh@10 -- # set +x 00:05:38.093 ************************************ 00:05:38.093 END TEST json_config_extra_key 00:05:38.093 ************************************ 00:05:38.093 11:43:31 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:38.093 11:43:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.093 11:43:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.093 11:43:31 -- common/autotest_common.sh@10 -- # set +x 00:05:38.093 ************************************ 00:05:38.093 START TEST alias_rpc 00:05:38.093 ************************************ 00:05:38.093 11:43:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:38.093 * Looking for test storage... 00:05:38.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:38.093 11:43:31 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:38.093 11:43:31 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1730643 00:05:38.093 11:43:31 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1730643 00:05:38.093 11:43:31 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:38.093 11:43:31 -- common/autotest_common.sh@819 -- # '[' -z 1730643 ']' 00:05:38.093 11:43:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.093 11:43:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:38.093 11:43:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:38.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.093 11:43:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:38.093 11:43:31 -- common/autotest_common.sh@10 -- # set +x 00:05:38.093 [2024-06-10 11:43:31.769464] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:38.093 [2024-06-10 11:43:31.769525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1730643 ] 00:05:38.093 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.093 [2024-06-10 11:43:31.831731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.353 [2024-06-10 11:43:31.897061] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:38.353 [2024-06-10 11:43:31.897196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.922 11:43:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:38.922 11:43:32 -- common/autotest_common.sh@852 -- # return 0 00:05:38.922 11:43:32 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:39.183 11:43:32 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1730643 00:05:39.183 11:43:32 -- common/autotest_common.sh@926 -- # '[' -z 1730643 ']' 00:05:39.183 11:43:32 -- common/autotest_common.sh@930 -- # kill -0 1730643 00:05:39.183 11:43:32 -- common/autotest_common.sh@931 -- # uname 00:05:39.183 11:43:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:39.183 11:43:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1730643 00:05:39.183 11:43:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:39.183 11:43:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:39.183 11:43:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1730643' 00:05:39.183 killing process with pid 1730643 00:05:39.183 11:43:32 -- common/autotest_common.sh@945 -- # kill 1730643 00:05:39.183 11:43:32 -- common/autotest_common.sh@950 -- # wait 1730643 00:05:39.444 00:05:39.444 real 0m1.364s 00:05:39.444 user 0m1.510s 00:05:39.444 sys 0m0.360s 00:05:39.444 11:43:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.444 11:43:32 -- common/autotest_common.sh@10 -- # set +x 00:05:39.444 ************************************ 00:05:39.444 END TEST alias_rpc 00:05:39.444 ************************************ 00:05:39.444 11:43:33 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:39.444 11:43:33 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:39.444 11:43:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.444 11:43:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.444 11:43:33 -- common/autotest_common.sh@10 -- # set +x 00:05:39.444 ************************************ 00:05:39.444 START TEST spdkcli_tcp 00:05:39.444 ************************************ 00:05:39.444 11:43:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:39.444 * Looking for test storage... 
00:05:39.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:39.444 11:43:33 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:39.444 11:43:33 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:39.444 11:43:33 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:39.444 11:43:33 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:39.444 11:43:33 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:39.444 11:43:33 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:39.444 11:43:33 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:39.444 11:43:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:39.444 11:43:33 -- common/autotest_common.sh@10 -- # set +x 00:05:39.444 11:43:33 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1731030 00:05:39.444 11:43:33 -- spdkcli/tcp.sh@27 -- # waitforlisten 1731030 00:05:39.444 11:43:33 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:39.444 11:43:33 -- common/autotest_common.sh@819 -- # '[' -z 1731030 ']' 00:05:39.444 11:43:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.444 11:43:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:39.444 11:43:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.444 11:43:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:39.444 11:43:33 -- common/autotest_common.sh@10 -- # set +x 00:05:39.444 [2024-06-10 11:43:33.182898] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
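Unlike the earlier tests, spdkcli_tcp talks to the target over TCP rather than the UNIX socket directly: a socat process bridges /var/tmp/spdk.sock to 127.0.0.1:9998 and rpc.py is pointed at the TCP side, as the rpc_get_methods call further below shows. A minimal sketch of that path, assuming socat is installed and the target is already up:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
IP_ADDRESS=127.0.0.1
PORT=9998
# expose the target's UNIX RPC socket on a local TCP port
socat TCP-LISTEN:$PORT UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!
# drive an RPC through the TCP side; -r/-t match the retry and timeout values the test uses
$SPDK/scripts/rpc.py -r 100 -t 2 -s $IP_ADDRESS -p $PORT rpc_get_methods
kill $socat_pid 2> /dev/null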
00:05:39.444 [2024-06-10 11:43:33.182960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1731030 ] 00:05:39.444 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.705 [2024-06-10 11:43:33.244255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.705 [2024-06-10 11:43:33.310311] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:39.705 [2024-06-10 11:43:33.310554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.705 [2024-06-10 11:43:33.310555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.277 11:43:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:40.277 11:43:33 -- common/autotest_common.sh@852 -- # return 0 00:05:40.277 11:43:33 -- spdkcli/tcp.sh@31 -- # socat_pid=1731203 00:05:40.277 11:43:33 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:40.277 11:43:33 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:40.539 [ 00:05:40.539 "bdev_malloc_delete", 00:05:40.539 "bdev_malloc_create", 00:05:40.539 "bdev_null_resize", 00:05:40.539 "bdev_null_delete", 00:05:40.539 "bdev_null_create", 00:05:40.539 "bdev_nvme_cuse_unregister", 00:05:40.539 "bdev_nvme_cuse_register", 00:05:40.539 "bdev_opal_new_user", 00:05:40.539 "bdev_opal_set_lock_state", 00:05:40.539 "bdev_opal_delete", 00:05:40.539 "bdev_opal_get_info", 00:05:40.539 "bdev_opal_create", 00:05:40.539 "bdev_nvme_opal_revert", 00:05:40.539 "bdev_nvme_opal_init", 00:05:40.539 "bdev_nvme_send_cmd", 00:05:40.539 "bdev_nvme_get_path_iostat", 00:05:40.539 "bdev_nvme_get_mdns_discovery_info", 00:05:40.539 "bdev_nvme_stop_mdns_discovery", 00:05:40.539 "bdev_nvme_start_mdns_discovery", 00:05:40.539 "bdev_nvme_set_multipath_policy", 00:05:40.539 "bdev_nvme_set_preferred_path", 00:05:40.539 "bdev_nvme_get_io_paths", 00:05:40.539 "bdev_nvme_remove_error_injection", 00:05:40.539 "bdev_nvme_add_error_injection", 00:05:40.539 "bdev_nvme_get_discovery_info", 00:05:40.539 "bdev_nvme_stop_discovery", 00:05:40.539 "bdev_nvme_start_discovery", 00:05:40.539 "bdev_nvme_get_controller_health_info", 00:05:40.539 "bdev_nvme_disable_controller", 00:05:40.539 "bdev_nvme_enable_controller", 00:05:40.539 "bdev_nvme_reset_controller", 00:05:40.539 "bdev_nvme_get_transport_statistics", 00:05:40.539 "bdev_nvme_apply_firmware", 00:05:40.539 "bdev_nvme_detach_controller", 00:05:40.539 "bdev_nvme_get_controllers", 00:05:40.539 "bdev_nvme_attach_controller", 00:05:40.539 "bdev_nvme_set_hotplug", 00:05:40.539 "bdev_nvme_set_options", 00:05:40.539 "bdev_passthru_delete", 00:05:40.539 "bdev_passthru_create", 00:05:40.539 "bdev_lvol_grow_lvstore", 00:05:40.539 "bdev_lvol_get_lvols", 00:05:40.539 "bdev_lvol_get_lvstores", 00:05:40.539 "bdev_lvol_delete", 00:05:40.539 "bdev_lvol_set_read_only", 00:05:40.539 "bdev_lvol_resize", 00:05:40.539 "bdev_lvol_decouple_parent", 00:05:40.539 "bdev_lvol_inflate", 00:05:40.539 "bdev_lvol_rename", 00:05:40.539 "bdev_lvol_clone_bdev", 00:05:40.539 "bdev_lvol_clone", 00:05:40.539 "bdev_lvol_snapshot", 00:05:40.539 "bdev_lvol_create", 00:05:40.539 "bdev_lvol_delete_lvstore", 00:05:40.539 "bdev_lvol_rename_lvstore", 00:05:40.539 "bdev_lvol_create_lvstore", 00:05:40.539 "bdev_raid_set_options", 00:05:40.539 
"bdev_raid_remove_base_bdev", 00:05:40.539 "bdev_raid_add_base_bdev", 00:05:40.539 "bdev_raid_delete", 00:05:40.539 "bdev_raid_create", 00:05:40.539 "bdev_raid_get_bdevs", 00:05:40.539 "bdev_error_inject_error", 00:05:40.539 "bdev_error_delete", 00:05:40.539 "bdev_error_create", 00:05:40.539 "bdev_split_delete", 00:05:40.539 "bdev_split_create", 00:05:40.539 "bdev_delay_delete", 00:05:40.539 "bdev_delay_create", 00:05:40.539 "bdev_delay_update_latency", 00:05:40.539 "bdev_zone_block_delete", 00:05:40.539 "bdev_zone_block_create", 00:05:40.539 "blobfs_create", 00:05:40.539 "blobfs_detect", 00:05:40.539 "blobfs_set_cache_size", 00:05:40.539 "bdev_aio_delete", 00:05:40.539 "bdev_aio_rescan", 00:05:40.539 "bdev_aio_create", 00:05:40.539 "bdev_ftl_set_property", 00:05:40.539 "bdev_ftl_get_properties", 00:05:40.539 "bdev_ftl_get_stats", 00:05:40.539 "bdev_ftl_unmap", 00:05:40.539 "bdev_ftl_unload", 00:05:40.539 "bdev_ftl_delete", 00:05:40.539 "bdev_ftl_load", 00:05:40.539 "bdev_ftl_create", 00:05:40.539 "bdev_virtio_attach_controller", 00:05:40.539 "bdev_virtio_scsi_get_devices", 00:05:40.539 "bdev_virtio_detach_controller", 00:05:40.539 "bdev_virtio_blk_set_hotplug", 00:05:40.539 "bdev_iscsi_delete", 00:05:40.539 "bdev_iscsi_create", 00:05:40.539 "bdev_iscsi_set_options", 00:05:40.539 "accel_error_inject_error", 00:05:40.539 "ioat_scan_accel_module", 00:05:40.539 "dsa_scan_accel_module", 00:05:40.539 "iaa_scan_accel_module", 00:05:40.539 "iscsi_set_options", 00:05:40.539 "iscsi_get_auth_groups", 00:05:40.539 "iscsi_auth_group_remove_secret", 00:05:40.539 "iscsi_auth_group_add_secret", 00:05:40.539 "iscsi_delete_auth_group", 00:05:40.539 "iscsi_create_auth_group", 00:05:40.539 "iscsi_set_discovery_auth", 00:05:40.539 "iscsi_get_options", 00:05:40.539 "iscsi_target_node_request_logout", 00:05:40.539 "iscsi_target_node_set_redirect", 00:05:40.539 "iscsi_target_node_set_auth", 00:05:40.539 "iscsi_target_node_add_lun", 00:05:40.539 "iscsi_get_connections", 00:05:40.539 "iscsi_portal_group_set_auth", 00:05:40.539 "iscsi_start_portal_group", 00:05:40.539 "iscsi_delete_portal_group", 00:05:40.539 "iscsi_create_portal_group", 00:05:40.539 "iscsi_get_portal_groups", 00:05:40.539 "iscsi_delete_target_node", 00:05:40.539 "iscsi_target_node_remove_pg_ig_maps", 00:05:40.539 "iscsi_target_node_add_pg_ig_maps", 00:05:40.539 "iscsi_create_target_node", 00:05:40.539 "iscsi_get_target_nodes", 00:05:40.539 "iscsi_delete_initiator_group", 00:05:40.539 "iscsi_initiator_group_remove_initiators", 00:05:40.539 "iscsi_initiator_group_add_initiators", 00:05:40.539 "iscsi_create_initiator_group", 00:05:40.539 "iscsi_get_initiator_groups", 00:05:40.539 "nvmf_set_crdt", 00:05:40.540 "nvmf_set_config", 00:05:40.540 "nvmf_set_max_subsystems", 00:05:40.540 "nvmf_subsystem_get_listeners", 00:05:40.540 "nvmf_subsystem_get_qpairs", 00:05:40.540 "nvmf_subsystem_get_controllers", 00:05:40.540 "nvmf_get_stats", 00:05:40.540 "nvmf_get_transports", 00:05:40.540 "nvmf_create_transport", 00:05:40.540 "nvmf_get_targets", 00:05:40.540 "nvmf_delete_target", 00:05:40.540 "nvmf_create_target", 00:05:40.540 "nvmf_subsystem_allow_any_host", 00:05:40.540 "nvmf_subsystem_remove_host", 00:05:40.540 "nvmf_subsystem_add_host", 00:05:40.540 "nvmf_subsystem_remove_ns", 00:05:40.540 "nvmf_subsystem_add_ns", 00:05:40.540 "nvmf_subsystem_listener_set_ana_state", 00:05:40.540 "nvmf_discovery_get_referrals", 00:05:40.540 "nvmf_discovery_remove_referral", 00:05:40.540 "nvmf_discovery_add_referral", 00:05:40.540 "nvmf_subsystem_remove_listener", 
00:05:40.540 "nvmf_subsystem_add_listener", 00:05:40.540 "nvmf_delete_subsystem", 00:05:40.540 "nvmf_create_subsystem", 00:05:40.540 "nvmf_get_subsystems", 00:05:40.540 "env_dpdk_get_mem_stats", 00:05:40.540 "nbd_get_disks", 00:05:40.540 "nbd_stop_disk", 00:05:40.540 "nbd_start_disk", 00:05:40.540 "ublk_recover_disk", 00:05:40.540 "ublk_get_disks", 00:05:40.540 "ublk_stop_disk", 00:05:40.540 "ublk_start_disk", 00:05:40.540 "ublk_destroy_target", 00:05:40.540 "ublk_create_target", 00:05:40.540 "virtio_blk_create_transport", 00:05:40.540 "virtio_blk_get_transports", 00:05:40.540 "vhost_controller_set_coalescing", 00:05:40.540 "vhost_get_controllers", 00:05:40.540 "vhost_delete_controller", 00:05:40.540 "vhost_create_blk_controller", 00:05:40.540 "vhost_scsi_controller_remove_target", 00:05:40.540 "vhost_scsi_controller_add_target", 00:05:40.540 "vhost_start_scsi_controller", 00:05:40.540 "vhost_create_scsi_controller", 00:05:40.540 "thread_set_cpumask", 00:05:40.540 "framework_get_scheduler", 00:05:40.540 "framework_set_scheduler", 00:05:40.540 "framework_get_reactors", 00:05:40.540 "thread_get_io_channels", 00:05:40.540 "thread_get_pollers", 00:05:40.540 "thread_get_stats", 00:05:40.540 "framework_monitor_context_switch", 00:05:40.540 "spdk_kill_instance", 00:05:40.540 "log_enable_timestamps", 00:05:40.540 "log_get_flags", 00:05:40.540 "log_clear_flag", 00:05:40.540 "log_set_flag", 00:05:40.540 "log_get_level", 00:05:40.540 "log_set_level", 00:05:40.540 "log_get_print_level", 00:05:40.540 "log_set_print_level", 00:05:40.540 "framework_enable_cpumask_locks", 00:05:40.540 "framework_disable_cpumask_locks", 00:05:40.540 "framework_wait_init", 00:05:40.540 "framework_start_init", 00:05:40.540 "scsi_get_devices", 00:05:40.540 "bdev_get_histogram", 00:05:40.540 "bdev_enable_histogram", 00:05:40.540 "bdev_set_qos_limit", 00:05:40.540 "bdev_set_qd_sampling_period", 00:05:40.540 "bdev_get_bdevs", 00:05:40.540 "bdev_reset_iostat", 00:05:40.540 "bdev_get_iostat", 00:05:40.540 "bdev_examine", 00:05:40.540 "bdev_wait_for_examine", 00:05:40.540 "bdev_set_options", 00:05:40.540 "notify_get_notifications", 00:05:40.540 "notify_get_types", 00:05:40.540 "accel_get_stats", 00:05:40.540 "accel_set_options", 00:05:40.540 "accel_set_driver", 00:05:40.540 "accel_crypto_key_destroy", 00:05:40.540 "accel_crypto_keys_get", 00:05:40.540 "accel_crypto_key_create", 00:05:40.540 "accel_assign_opc", 00:05:40.540 "accel_get_module_info", 00:05:40.540 "accel_get_opc_assignments", 00:05:40.540 "vmd_rescan", 00:05:40.540 "vmd_remove_device", 00:05:40.540 "vmd_enable", 00:05:40.540 "sock_set_default_impl", 00:05:40.540 "sock_impl_set_options", 00:05:40.540 "sock_impl_get_options", 00:05:40.540 "iobuf_get_stats", 00:05:40.540 "iobuf_set_options", 00:05:40.540 "framework_get_pci_devices", 00:05:40.540 "framework_get_config", 00:05:40.540 "framework_get_subsystems", 00:05:40.540 "trace_get_info", 00:05:40.540 "trace_get_tpoint_group_mask", 00:05:40.540 "trace_disable_tpoint_group", 00:05:40.540 "trace_enable_tpoint_group", 00:05:40.540 "trace_clear_tpoint_mask", 00:05:40.540 "trace_set_tpoint_mask", 00:05:40.540 "spdk_get_version", 00:05:40.540 "rpc_get_methods" 00:05:40.540 ] 00:05:40.540 11:43:34 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:40.540 11:43:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:40.540 11:43:34 -- common/autotest_common.sh@10 -- # set +x 00:05:40.540 11:43:34 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:40.540 11:43:34 -- spdkcli/tcp.sh@38 -- # killprocess 
1731030 00:05:40.540 11:43:34 -- common/autotest_common.sh@926 -- # '[' -z 1731030 ']' 00:05:40.540 11:43:34 -- common/autotest_common.sh@930 -- # kill -0 1731030 00:05:40.540 11:43:34 -- common/autotest_common.sh@931 -- # uname 00:05:40.540 11:43:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:40.540 11:43:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1731030 00:05:40.540 11:43:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:40.540 11:43:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:40.540 11:43:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1731030' 00:05:40.540 killing process with pid 1731030 00:05:40.540 11:43:34 -- common/autotest_common.sh@945 -- # kill 1731030 00:05:40.540 11:43:34 -- common/autotest_common.sh@950 -- # wait 1731030 00:05:40.801 00:05:40.801 real 0m1.368s 00:05:40.801 user 0m2.554s 00:05:40.801 sys 0m0.376s 00:05:40.801 11:43:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.801 11:43:34 -- common/autotest_common.sh@10 -- # set +x 00:05:40.801 ************************************ 00:05:40.801 END TEST spdkcli_tcp 00:05:40.801 ************************************ 00:05:40.801 11:43:34 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:40.801 11:43:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:40.801 11:43:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.802 11:43:34 -- common/autotest_common.sh@10 -- # set +x 00:05:40.802 ************************************ 00:05:40.802 START TEST dpdk_mem_utility 00:05:40.802 ************************************ 00:05:40.802 11:43:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:40.802 * Looking for test storage... 00:05:40.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:40.802 11:43:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:40.802 11:43:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1731433 00:05:40.802 11:43:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1731433 00:05:40.802 11:43:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.802 11:43:34 -- common/autotest_common.sh@819 -- # '[' -z 1731433 ']' 00:05:40.802 11:43:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.802 11:43:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:40.802 11:43:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.802 11:43:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:40.802 11:43:34 -- common/autotest_common.sh@10 -- # set +x 00:05:41.062 [2024-06-10 11:43:34.586628] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:41.062 [2024-06-10 11:43:34.586706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1731433 ] 00:05:41.062 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.062 [2024-06-10 11:43:34.653602] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.062 [2024-06-10 11:43:34.724941] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:41.062 [2024-06-10 11:43:34.725081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.634 11:43:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:41.634 11:43:35 -- common/autotest_common.sh@852 -- # return 0 00:05:41.634 11:43:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:41.634 11:43:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:41.634 11:43:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.634 11:43:35 -- common/autotest_common.sh@10 -- # set +x 00:05:41.634 { 00:05:41.634 "filename": "/tmp/spdk_mem_dump.txt" 00:05:41.634 } 00:05:41.634 11:43:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.634 11:43:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:41.895 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:41.895 1 heaps totaling size 814.000000 MiB 00:05:41.895 size: 814.000000 MiB heap id: 0 00:05:41.895 end heaps---------- 00:05:41.895 8 mempools totaling size 598.116089 MiB 00:05:41.896 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:41.896 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:41.896 size: 84.521057 MiB name: bdev_io_1731433 00:05:41.896 size: 51.011292 MiB name: evtpool_1731433 00:05:41.896 size: 50.003479 MiB name: msgpool_1731433 00:05:41.896 size: 21.763794 MiB name: PDU_Pool 00:05:41.896 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:41.896 size: 0.026123 MiB name: Session_Pool 00:05:41.896 end mempools------- 00:05:41.896 6 memzones totaling size 4.142822 MiB 00:05:41.896 size: 1.000366 MiB name: RG_ring_0_1731433 00:05:41.896 size: 1.000366 MiB name: RG_ring_1_1731433 00:05:41.896 size: 1.000366 MiB name: RG_ring_4_1731433 00:05:41.896 size: 1.000366 MiB name: RG_ring_5_1731433 00:05:41.896 size: 0.125366 MiB name: RG_ring_2_1731433 00:05:41.896 size: 0.015991 MiB name: RG_ring_3_1731433 00:05:41.896 end memzones------- 00:05:41.896 11:43:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:41.896 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:41.896 list of free elements. 
size: 12.519348 MiB 00:05:41.896 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:41.896 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:41.896 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:41.896 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:41.896 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:41.896 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:41.896 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:41.896 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:41.896 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:41.896 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:41.896 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:41.896 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:41.896 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:41.896 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:41.896 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:41.896 list of standard malloc elements. size: 199.218079 MiB 00:05:41.896 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:41.896 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:41.896 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:41.896 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:41.896 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:41.896 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:41.896 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:41.896 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:41.896 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:41.896 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:41.896 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:41.896 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:41.896 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:41.896 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:41.896 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:41.896 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:41.896 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:41.896 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:41.896 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:41.896 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:41.896 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:41.896 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:41.896 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:41.896 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:41.896 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:41.896 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:41.896 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:41.896 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:41.896 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:41.896 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:41.896 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:41.896 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:41.896 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:41.896 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:41.896 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:41.896 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:41.896 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:41.896 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:41.896 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:41.896 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:41.896 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:41.896 list of memzone associated elements. size: 602.262573 MiB 00:05:41.896 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:41.896 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:41.896 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:41.896 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:41.896 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:41.896 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1731433_0 00:05:41.896 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:41.896 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1731433_0 00:05:41.896 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:41.896 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1731433_0 00:05:41.896 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:41.896 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:41.896 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:41.896 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:41.896 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:41.896 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1731433 00:05:41.896 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:41.896 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1731433 00:05:41.896 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:41.896 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1731433 00:05:41.896 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:41.896 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:41.896 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:41.896 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:41.896 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:41.896 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:41.896 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:41.896 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:41.896 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:41.896 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1731433 00:05:41.896 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:41.896 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1731433 00:05:41.896 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:41.896 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1731433 00:05:41.896 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:41.896 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1731433 00:05:41.896 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:41.896 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1731433 00:05:41.896 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:41.896 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:41.896 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:41.896 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:41.896 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:41.896 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:41.896 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:41.896 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1731433 00:05:41.896 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:41.896 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:41.896 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:41.896 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:41.896 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:41.896 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1731433 00:05:41.896 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:41.896 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:41.896 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:41.896 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1731433 00:05:41.896 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:41.896 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1731433 00:05:41.896 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:41.896 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:41.896 11:43:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:41.896 11:43:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1731433 00:05:41.896 11:43:35 -- common/autotest_common.sh@926 -- # '[' -z 1731433 ']' 00:05:41.896 11:43:35 -- common/autotest_common.sh@930 -- # kill -0 1731433 00:05:41.896 11:43:35 -- common/autotest_common.sh@931 -- # uname 00:05:41.896 11:43:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:41.896 11:43:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1731433 00:05:41.896 11:43:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:41.896 11:43:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:41.896 11:43:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1731433' 00:05:41.896 killing process with pid 1731433 00:05:41.896 11:43:35 -- common/autotest_common.sh@945 -- # kill 1731433 00:05:41.897 11:43:35 -- common/autotest_common.sh@950 -- # wait 1731433 00:05:42.158 00:05:42.158 real 0m1.280s 00:05:42.158 user 0m1.358s 00:05:42.158 sys 0m0.365s 00:05:42.158 11:43:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.158 11:43:35 -- common/autotest_common.sh@10 -- # set +x 00:05:42.158 ************************************ 00:05:42.158 END TEST dpdk_mem_utility 00:05:42.158 ************************************ 00:05:42.158 11:43:35 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:42.158 11:43:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:42.158 11:43:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.158 11:43:35 -- common/autotest_common.sh@10 -- # set +x 
00:05:42.158 ************************************ 00:05:42.158 START TEST event 00:05:42.158 ************************************ 00:05:42.158 11:43:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:42.158 * Looking for test storage... 00:05:42.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:42.158 11:43:35 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:42.158 11:43:35 -- bdev/nbd_common.sh@6 -- # set -e 00:05:42.158 11:43:35 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:42.158 11:43:35 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:42.158 11:43:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.158 11:43:35 -- common/autotest_common.sh@10 -- # set +x 00:05:42.158 ************************************ 00:05:42.158 START TEST event_perf 00:05:42.158 ************************************ 00:05:42.158 11:43:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:42.158 Running I/O for 1 seconds...[2024-06-10 11:43:35.881655] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:42.158 [2024-06-10 11:43:35.881771] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1731799 ] 00:05:42.158 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.419 [2024-06-10 11:43:35.951273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:42.419 [2024-06-10 11:43:36.022111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.419 [2024-06-10 11:43:36.022228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.419 [2024-06-10 11:43:36.022384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:42.419 [2024-06-10 11:43:36.022492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.361 Running I/O for 1 seconds... 00:05:43.361 lcore 0: 172578 00:05:43.361 lcore 1: 172578 00:05:43.361 lcore 2: 172577 00:05:43.361 lcore 3: 172579 00:05:43.361 done. 
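event_perf prints one line per reactor with the number of events it processed during the run; with -m 0xF and -t 1 each of the four lcores handled roughly 172k events here. Re-running it by hand and summing the counters is a one-liner (the "lcore N: count" output format is assumed to match the lines above):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# 4-core mask, 1 second run; add up the per-lcore event counts
$SPDK/test/event/event_perf/event_perf -m 0xF -t 1 | awk '/^lcore/ {total += $3} END {print "total events:", total}'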
00:05:43.361 00:05:43.361 real 0m1.216s 00:05:43.361 user 0m4.131s 00:05:43.361 sys 0m0.083s 00:05:43.361 11:43:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.361 11:43:37 -- common/autotest_common.sh@10 -- # set +x 00:05:43.361 ************************************ 00:05:43.361 END TEST event_perf 00:05:43.361 ************************************ 00:05:43.361 11:43:37 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:43.361 11:43:37 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:43.361 11:43:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.361 11:43:37 -- common/autotest_common.sh@10 -- # set +x 00:05:43.361 ************************************ 00:05:43.361 START TEST event_reactor 00:05:43.361 ************************************ 00:05:43.361 11:43:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:43.622 [2024-06-10 11:43:37.141658] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:43.622 [2024-06-10 11:43:37.141754] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1731930 ] 00:05:43.622 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.622 [2024-06-10 11:43:37.207543] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.622 [2024-06-10 11:43:37.271864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.564 test_start 00:05:44.564 oneshot 00:05:44.564 tick 100 00:05:44.564 tick 100 00:05:44.564 tick 250 00:05:44.564 tick 100 00:05:44.564 tick 100 00:05:44.564 tick 100 00:05:44.564 tick 250 00:05:44.564 tick 500 00:05:44.564 tick 100 00:05:44.564 tick 100 00:05:44.564 tick 250 00:05:44.564 tick 100 00:05:44.564 tick 100 00:05:44.564 test_end 00:05:44.564 00:05:44.564 real 0m1.205s 00:05:44.564 user 0m1.130s 00:05:44.564 sys 0m0.071s 00:05:44.564 11:43:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.564 11:43:38 -- common/autotest_common.sh@10 -- # set +x 00:05:44.564 ************************************ 00:05:44.564 END TEST event_reactor 00:05:44.564 ************************************ 00:05:44.825 11:43:38 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:44.825 11:43:38 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:44.825 11:43:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.825 11:43:38 -- common/autotest_common.sh@10 -- # set +x 00:05:44.825 ************************************ 00:05:44.825 START TEST event_reactor_perf 00:05:44.825 ************************************ 00:05:44.825 11:43:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:44.825 [2024-06-10 11:43:38.388462] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:44.825 [2024-06-10 11:43:38.388560] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1732212 ] 00:05:44.825 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.825 [2024-06-10 11:43:38.453025] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.825 [2024-06-10 11:43:38.515805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.209 test_start 00:05:46.209 test_end 00:05:46.209 Performance: 365410 events per second 00:05:46.209 00:05:46.209 real 0m1.200s 00:05:46.209 user 0m1.126s 00:05:46.209 sys 0m0.070s 00:05:46.209 11:43:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.209 11:43:39 -- common/autotest_common.sh@10 -- # set +x 00:05:46.209 ************************************ 00:05:46.209 END TEST event_reactor_perf 00:05:46.209 ************************************ 00:05:46.209 11:43:39 -- event/event.sh@49 -- # uname -s 00:05:46.209 11:43:39 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:46.209 11:43:39 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:46.209 11:43:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.210 11:43:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.210 11:43:39 -- common/autotest_common.sh@10 -- # set +x 00:05:46.210 ************************************ 00:05:46.210 START TEST event_scheduler 00:05:46.210 ************************************ 00:05:46.210 11:43:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:46.210 * Looking for test storage... 00:05:46.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:46.210 11:43:39 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:46.210 11:43:39 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1732593 00:05:46.210 11:43:39 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.210 11:43:39 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:46.210 11:43:39 -- scheduler/scheduler.sh@37 -- # waitforlisten 1732593 00:05:46.210 11:43:39 -- common/autotest_common.sh@819 -- # '[' -z 1732593 ']' 00:05:46.210 11:43:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.210 11:43:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:46.210 11:43:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.210 11:43:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:46.210 11:43:39 -- common/autotest_common.sh@10 -- # set +x 00:05:46.210 [2024-06-10 11:43:39.751237] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:46.210 [2024-06-10 11:43:39.751311] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1732593 ] 00:05:46.210 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.210 [2024-06-10 11:43:39.805338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:46.210 [2024-06-10 11:43:39.862478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.210 [2024-06-10 11:43:39.862754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.210 [2024-06-10 11:43:39.862907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.210 [2024-06-10 11:43:39.862907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.781 11:43:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:46.781 11:43:40 -- common/autotest_common.sh@852 -- # return 0 00:05:46.781 11:43:40 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:46.781 11:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.781 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:05:46.781 POWER: Env isn't set yet! 00:05:46.781 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:46.781 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:46.781 POWER: Cannot set governor of lcore 0 to userspace 00:05:46.781 POWER: Attempting to initialise PSTAT power management... 00:05:46.781 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:46.781 POWER: Initialized successfully for lcore 0 power management 00:05:47.042 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:47.043 POWER: Initialized successfully for lcore 1 power management 00:05:47.043 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:47.043 POWER: Initialized successfully for lcore 2 power management 00:05:47.043 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:47.043 POWER: Initialized successfully for lcore 3 power management 00:05:47.043 11:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:47.043 11:43:40 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:47.043 11:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:47.043 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:05:47.043 [2024-06-10 11:43:40.648498] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
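Editor's note: for readers following along, this is a minimal sketch of the RPC sequence the scheduler test drives once the app is up with --wait-for-rpc. The rpc.py path and socket below are assumptions standing in for the test's rpc_cmd wrapper, and the plugin call assumes the scheduler plugin directory is already on PYTHONPATH, as the test arranges.

    rpc="./scripts/rpc.py"        # assumed path; the test wraps this as rpc_cmd
    sock=/var/tmp/spdk.sock       # default SPDK RPC socket

    # Select the dynamic scheduler, then let the framework finish subsystem init.
    "$rpc" -s "$sock" framework_set_scheduler dynamic
    "$rpc" -s "$sock" framework_start_init

    # Worker threads are then created through the scheduler plugin, e.g. an
    # "active" thread pinned to core 0 (mask 0x1) with 100% expected load:
    "$rpc" -s "$sock" --plugin scheduler_plugin scheduler_thread_create \
        -n active_pinned -m 0x1 -a 100
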
00:05:47.043 11:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:47.043 11:43:40 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:47.043 11:43:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:47.043 11:43:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.043 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:05:47.043 ************************************ 00:05:47.043 START TEST scheduler_create_thread 00:05:47.043 ************************************ 00:05:47.043 11:43:40 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:05:47.043 11:43:40 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:47.043 11:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:47.043 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:05:47.043 2 00:05:47.043 11:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:47.043 11:43:40 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:47.043 11:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:47.043 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:05:47.043 3 00:05:47.043 11:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:47.043 11:43:40 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:47.043 11:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:47.043 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:05:47.043 4 00:05:47.043 11:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:47.043 11:43:40 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:47.043 11:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:47.043 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:05:47.043 5 00:05:47.043 11:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:47.043 11:43:40 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:47.043 11:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:47.043 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:05:47.043 6 00:05:47.043 11:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:47.043 11:43:40 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:47.043 11:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:47.043 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:05:47.043 7 00:05:47.043 11:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:47.043 11:43:40 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:47.043 11:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:47.043 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:05:47.043 8 00:05:47.043 11:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:47.043 11:43:40 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:47.043 11:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:47.043 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:05:47.043 9 00:05:47.043 
11:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:47.043 11:43:40 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:47.043 11:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:47.043 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:05:48.429 10 00:05:48.429 11:43:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.429 11:43:41 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:48.429 11:43:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.429 11:43:41 -- common/autotest_common.sh@10 -- # set +x 00:05:49.814 11:43:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.814 11:43:43 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:49.814 11:43:43 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:49.814 11:43:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.814 11:43:43 -- common/autotest_common.sh@10 -- # set +x 00:05:50.386 11:43:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:50.386 11:43:44 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:50.386 11:43:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:50.386 11:43:44 -- common/autotest_common.sh@10 -- # set +x 00:05:51.327 11:43:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:51.327 11:43:44 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:51.327 11:43:44 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:51.327 11:43:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:51.327 11:43:44 -- common/autotest_common.sh@10 -- # set +x 00:05:51.896 11:43:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:51.896 00:05:51.896 real 0m4.897s 00:05:51.896 user 0m0.025s 00:05:51.896 sys 0m0.006s 00:05:51.896 11:43:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.896 11:43:45 -- common/autotest_common.sh@10 -- # set +x 00:05:51.896 ************************************ 00:05:51.896 END TEST scheduler_create_thread 00:05:51.896 ************************************ 00:05:51.896 11:43:45 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:51.896 11:43:45 -- scheduler/scheduler.sh@46 -- # killprocess 1732593 00:05:51.896 11:43:45 -- common/autotest_common.sh@926 -- # '[' -z 1732593 ']' 00:05:51.896 11:43:45 -- common/autotest_common.sh@930 -- # kill -0 1732593 00:05:51.896 11:43:45 -- common/autotest_common.sh@931 -- # uname 00:05:51.896 11:43:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:51.896 11:43:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1732593 00:05:51.896 11:43:45 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:51.896 11:43:45 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:51.896 11:43:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1732593' 00:05:51.896 killing process with pid 1732593 00:05:51.896 11:43:45 -- common/autotest_common.sh@945 -- # kill 1732593 00:05:51.896 11:43:45 -- common/autotest_common.sh@950 -- # wait 1732593 00:05:52.467 [2024-06-10 11:43:45.934842] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
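Editor's note: the teardown above runs the usual killprocess helper. A condensed sketch of that pattern, as visible in the trace (check the pid is alive, refuse to TERM a sudo wrapper, then kill and wait); variable names are illustrative and the real helper does additional bookkeeping.

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0          # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")         # e.g. reactor_2 in this run
        [ "$name" = sudo ] && return 1                  # never SIGTERM the sudo shim
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }
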
00:05:52.467 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:52.467 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:52.467 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:52.467 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:52.467 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:52.467 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:52.467 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:52.467 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:52.467 00:05:52.467 real 0m6.462s 00:05:52.467 user 0m15.545s 00:05:52.467 sys 0m0.321s 00:05:52.467 11:43:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.467 11:43:46 -- common/autotest_common.sh@10 -- # set +x 00:05:52.467 ************************************ 00:05:52.467 END TEST event_scheduler 00:05:52.467 ************************************ 00:05:52.467 11:43:46 -- event/event.sh@51 -- # modprobe -n nbd 00:05:52.467 11:43:46 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:52.467 11:43:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:52.467 11:43:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.467 11:43:46 -- common/autotest_common.sh@10 -- # set +x 00:05:52.467 ************************************ 00:05:52.467 START TEST app_repeat 00:05:52.467 ************************************ 00:05:52.467 11:43:46 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:05:52.467 11:43:46 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.467 11:43:46 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.467 11:43:46 -- event/event.sh@13 -- # local nbd_list 00:05:52.467 11:43:46 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.467 11:43:46 -- event/event.sh@14 -- # local bdev_list 00:05:52.467 11:43:46 -- event/event.sh@15 -- # local repeat_times=4 00:05:52.467 11:43:46 -- event/event.sh@17 -- # modprobe nbd 00:05:52.467 11:43:46 -- event/event.sh@19 -- # repeat_pid=1733996 00:05:52.467 11:43:46 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.467 11:43:46 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:52.467 11:43:46 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1733996' 00:05:52.467 Process app_repeat pid: 1733996 00:05:52.467 11:43:46 -- event/event.sh@23 -- # for i in {0..2} 00:05:52.467 11:43:46 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:52.467 spdk_app_start Round 0 00:05:52.467 11:43:46 -- event/event.sh@25 -- # waitforlisten 1733996 /var/tmp/spdk-nbd.sock 00:05:52.467 11:43:46 -- common/autotest_common.sh@819 -- # '[' -z 1733996 ']' 00:05:52.467 11:43:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:52.467 11:43:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:52.467 11:43:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:52.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:52.467 11:43:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:52.467 11:43:46 -- common/autotest_common.sh@10 -- # set +x 00:05:52.467 [2024-06-10 11:43:46.163659] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:52.467 [2024-06-10 11:43:46.163797] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1733996 ] 00:05:52.467 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.467 [2024-06-10 11:43:46.238842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.728 [2024-06-10 11:43:46.302324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.728 [2024-06-10 11:43:46.302493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.333 11:43:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:53.333 11:43:46 -- common/autotest_common.sh@852 -- # return 0 00:05:53.333 11:43:46 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.333 Malloc0 00:05:53.333 11:43:47 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.603 Malloc1 00:05:53.603 11:43:47 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.603 11:43:47 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.604 11:43:47 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.604 11:43:47 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:53.604 11:43:47 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.604 11:43:47 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:53.604 11:43:47 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.604 11:43:47 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.604 11:43:47 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.604 11:43:47 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:53.604 11:43:47 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.604 11:43:47 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:53.604 11:43:47 -- bdev/nbd_common.sh@12 -- # local i 00:05:53.604 11:43:47 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:53.604 11:43:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.604 11:43:47 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:53.865 /dev/nbd0 00:05:53.865 11:43:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:53.865 11:43:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:53.865 11:43:47 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:53.865 11:43:47 -- common/autotest_common.sh@857 -- # local i 00:05:53.865 11:43:47 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:53.865 11:43:47 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:53.865 11:43:47 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:53.865 11:43:47 -- 
common/autotest_common.sh@861 -- # break 00:05:53.865 11:43:47 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:53.865 11:43:47 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:53.865 11:43:47 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.865 1+0 records in 00:05:53.865 1+0 records out 00:05:53.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024081 s, 17.0 MB/s 00:05:53.865 11:43:47 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.865 11:43:47 -- common/autotest_common.sh@874 -- # size=4096 00:05:53.865 11:43:47 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.865 11:43:47 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:53.865 11:43:47 -- common/autotest_common.sh@877 -- # return 0 00:05:53.865 11:43:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.865 11:43:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.865 11:43:47 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:53.865 /dev/nbd1 00:05:53.865 11:43:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:53.865 11:43:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:53.865 11:43:47 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:53.865 11:43:47 -- common/autotest_common.sh@857 -- # local i 00:05:53.865 11:43:47 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:53.865 11:43:47 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:53.865 11:43:47 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:53.865 11:43:47 -- common/autotest_common.sh@861 -- # break 00:05:53.865 11:43:47 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:53.865 11:43:47 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:53.865 11:43:47 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.865 1+0 records in 00:05:53.865 1+0 records out 00:05:53.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275545 s, 14.9 MB/s 00:05:53.865 11:43:47 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.865 11:43:47 -- common/autotest_common.sh@874 -- # size=4096 00:05:53.865 11:43:47 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.865 11:43:47 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:53.865 11:43:47 -- common/autotest_common.sh@877 -- # return 0 00:05:53.865 11:43:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.865 11:43:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.865 11:43:47 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.865 11:43:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.865 11:43:47 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.126 11:43:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:54.126 { 00:05:54.126 "nbd_device": "/dev/nbd0", 00:05:54.126 "bdev_name": "Malloc0" 00:05:54.126 }, 00:05:54.126 { 00:05:54.126 "nbd_device": "/dev/nbd1", 
00:05:54.126 "bdev_name": "Malloc1" 00:05:54.126 } 00:05:54.126 ]' 00:05:54.126 11:43:47 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:54.126 { 00:05:54.126 "nbd_device": "/dev/nbd0", 00:05:54.126 "bdev_name": "Malloc0" 00:05:54.126 }, 00:05:54.126 { 00:05:54.126 "nbd_device": "/dev/nbd1", 00:05:54.126 "bdev_name": "Malloc1" 00:05:54.126 } 00:05:54.126 ]' 00:05:54.126 11:43:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.126 11:43:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:54.126 /dev/nbd1' 00:05:54.126 11:43:47 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:54.126 /dev/nbd1' 00:05:54.126 11:43:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.126 11:43:47 -- bdev/nbd_common.sh@65 -- # count=2 00:05:54.126 11:43:47 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:54.126 11:43:47 -- bdev/nbd_common.sh@95 -- # count=2 00:05:54.126 11:43:47 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:54.126 11:43:47 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:54.126 11:43:47 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.126 11:43:47 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.126 11:43:47 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:54.126 11:43:47 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.126 11:43:47 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:54.126 11:43:47 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:54.126 256+0 records in 00:05:54.126 256+0 records out 00:05:54.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121905 s, 86.0 MB/s 00:05:54.126 11:43:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.126 11:43:47 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:54.126 256+0 records in 00:05:54.126 256+0 records out 00:05:54.127 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161158 s, 65.1 MB/s 00:05:54.127 11:43:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.127 11:43:47 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:54.127 256+0 records in 00:05:54.127 256+0 records out 00:05:54.127 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171168 s, 61.3 MB/s 00:05:54.127 11:43:47 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:54.127 11:43:47 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.127 11:43:47 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.127 11:43:47 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:54.127 11:43:47 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.127 11:43:47 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:54.127 11:43:47 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:54.127 11:43:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.127 11:43:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:54.127 11:43:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.127 11:43:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:54.127 11:43:47 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.127 11:43:47 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:54.127 11:43:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.127 11:43:47 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.127 11:43:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:54.127 11:43:47 -- bdev/nbd_common.sh@51 -- # local i 00:05:54.127 11:43:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.127 11:43:47 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:54.387 11:43:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:54.387 11:43:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:54.387 11:43:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:54.387 11:43:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.387 11:43:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.387 11:43:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:54.387 11:43:48 -- bdev/nbd_common.sh@41 -- # break 00:05:54.387 11:43:48 -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.387 11:43:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.387 11:43:48 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:54.647 11:43:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:54.647 11:43:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:54.647 11:43:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:54.647 11:43:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.647 11:43:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.647 11:43:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:54.647 11:43:48 -- bdev/nbd_common.sh@41 -- # break 00:05:54.647 11:43:48 -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.647 11:43:48 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.647 11:43:48 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.647 11:43:48 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.647 11:43:48 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:54.647 11:43:48 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:54.647 11:43:48 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.647 11:43:48 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:54.647 11:43:48 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:54.647 11:43:48 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.647 11:43:48 -- bdev/nbd_common.sh@65 -- # true 00:05:54.647 11:43:48 -- bdev/nbd_common.sh@65 -- # count=0 00:05:54.647 11:43:48 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:54.648 11:43:48 -- bdev/nbd_common.sh@104 -- # count=0 00:05:54.648 11:43:48 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:54.648 11:43:48 -- bdev/nbd_common.sh@109 -- # return 0 00:05:54.648 11:43:48 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:54.908 11:43:48 -- event/event.sh@35 -- # 
sleep 3 00:05:55.169 [2024-06-10 11:43:48.698590] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.169 [2024-06-10 11:43:48.760024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.169 [2024-06-10 11:43:48.760028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.169 [2024-06-10 11:43:48.791527] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:55.169 [2024-06-10 11:43:48.791560] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:58.468 11:43:51 -- event/event.sh@23 -- # for i in {0..2} 00:05:58.468 11:43:51 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:58.468 spdk_app_start Round 1 00:05:58.468 11:43:51 -- event/event.sh@25 -- # waitforlisten 1733996 /var/tmp/spdk-nbd.sock 00:05:58.468 11:43:51 -- common/autotest_common.sh@819 -- # '[' -z 1733996 ']' 00:05:58.468 11:43:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:58.468 11:43:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:58.468 11:43:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:58.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:58.468 11:43:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:58.468 11:43:51 -- common/autotest_common.sh@10 -- # set +x 00:05:58.468 11:43:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:58.468 11:43:51 -- common/autotest_common.sh@852 -- # return 0 00:05:58.468 11:43:51 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.468 Malloc0 00:05:58.468 11:43:51 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.468 Malloc1 00:05:58.468 11:43:52 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.468 11:43:52 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.468 11:43:52 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.468 11:43:52 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:58.468 11:43:52 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.468 11:43:52 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:58.468 11:43:52 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.468 11:43:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.468 11:43:52 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.468 11:43:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:58.468 11:43:52 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.468 11:43:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:58.468 11:43:52 -- bdev/nbd_common.sh@12 -- # local i 00:05:58.468 11:43:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:58.468 11:43:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.468 11:43:52 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:58.468 /dev/nbd0 00:05:58.468 11:43:52 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:58.468 11:43:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:58.468 11:43:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:58.468 11:43:52 -- common/autotest_common.sh@857 -- # local i 00:05:58.468 11:43:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:58.468 11:43:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:58.468 11:43:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:58.468 11:43:52 -- common/autotest_common.sh@861 -- # break 00:05:58.468 11:43:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:58.468 11:43:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:58.468 11:43:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.468 1+0 records in 00:05:58.468 1+0 records out 00:05:58.468 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270473 s, 15.1 MB/s 00:05:58.468 11:43:52 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.468 11:43:52 -- common/autotest_common.sh@874 -- # size=4096 00:05:58.468 11:43:52 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.468 11:43:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:58.468 11:43:52 -- common/autotest_common.sh@877 -- # return 0 00:05:58.468 11:43:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.468 11:43:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.468 11:43:52 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:58.730 /dev/nbd1 00:05:58.730 11:43:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:58.730 11:43:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:58.730 11:43:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:58.730 11:43:52 -- common/autotest_common.sh@857 -- # local i 00:05:58.730 11:43:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:58.730 11:43:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:58.730 11:43:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:58.730 11:43:52 -- common/autotest_common.sh@861 -- # break 00:05:58.730 11:43:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:58.730 11:43:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:58.730 11:43:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.730 1+0 records in 00:05:58.730 1+0 records out 00:05:58.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286066 s, 14.3 MB/s 00:05:58.730 11:43:52 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.730 11:43:52 -- common/autotest_common.sh@874 -- # size=4096 00:05:58.730 11:43:52 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.730 11:43:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:58.730 11:43:52 -- common/autotest_common.sh@877 -- # return 0 00:05:58.730 11:43:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.730 11:43:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.730 11:43:52 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.730 11:43:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.730 11:43:52 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:58.990 { 00:05:58.990 "nbd_device": "/dev/nbd0", 00:05:58.990 "bdev_name": "Malloc0" 00:05:58.990 }, 00:05:58.990 { 00:05:58.990 "nbd_device": "/dev/nbd1", 00:05:58.990 "bdev_name": "Malloc1" 00:05:58.990 } 00:05:58.990 ]' 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:58.990 { 00:05:58.990 "nbd_device": "/dev/nbd0", 00:05:58.990 "bdev_name": "Malloc0" 00:05:58.990 }, 00:05:58.990 { 00:05:58.990 "nbd_device": "/dev/nbd1", 00:05:58.990 "bdev_name": "Malloc1" 00:05:58.990 } 00:05:58.990 ]' 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:58.990 /dev/nbd1' 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:58.990 /dev/nbd1' 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@65 -- # count=2 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@95 -- # count=2 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:58.990 256+0 records in 00:05:58.990 256+0 records out 00:05:58.990 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116453 s, 90.0 MB/s 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:58.990 256+0 records in 00:05:58.990 256+0 records out 00:05:58.990 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158316 s, 66.2 MB/s 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:58.990 256+0 records in 00:05:58.990 256+0 records out 00:05:58.990 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.017304 s, 60.6 MB/s 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@51 -- # local i 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.990 11:43:52 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:59.250 11:43:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:59.250 11:43:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:59.250 11:43:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:59.250 11:43:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.250 11:43:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.250 11:43:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:59.250 11:43:52 -- bdev/nbd_common.sh@41 -- # break 00:05:59.250 11:43:52 -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.250 11:43:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.251 11:43:52 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:59.251 11:43:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:59.251 11:43:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:59.251 11:43:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:59.251 11:43:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.251 11:43:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.251 11:43:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:59.251 11:43:52 -- bdev/nbd_common.sh@41 -- # break 00:05:59.251 11:43:52 -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.251 11:43:52 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.251 11:43:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.251 11:43:52 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.511 11:43:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:59.511 11:43:53 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:59.511 11:43:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.511 11:43:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:59.511 11:43:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.511 11:43:53 -- 
bdev/nbd_common.sh@65 -- # echo '' 00:05:59.511 11:43:53 -- bdev/nbd_common.sh@65 -- # true 00:05:59.511 11:43:53 -- bdev/nbd_common.sh@65 -- # count=0 00:05:59.511 11:43:53 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:59.511 11:43:53 -- bdev/nbd_common.sh@104 -- # count=0 00:05:59.511 11:43:53 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:59.511 11:43:53 -- bdev/nbd_common.sh@109 -- # return 0 00:05:59.511 11:43:53 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:59.772 11:43:53 -- event/event.sh@35 -- # sleep 3 00:05:59.772 [2024-06-10 11:43:53.467114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.772 [2024-06-10 11:43:53.527901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.772 [2024-06-10 11:43:53.527903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.032 [2024-06-10 11:43:53.559306] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:00.032 [2024-06-10 11:43:53.559353] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:02.576 11:43:56 -- event/event.sh@23 -- # for i in {0..2} 00:06:02.576 11:43:56 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:02.576 spdk_app_start Round 2 00:06:02.576 11:43:56 -- event/event.sh@25 -- # waitforlisten 1733996 /var/tmp/spdk-nbd.sock 00:06:02.576 11:43:56 -- common/autotest_common.sh@819 -- # '[' -z 1733996 ']' 00:06:02.576 11:43:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:02.576 11:43:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:02.576 11:43:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:02.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:02.576 11:43:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:02.576 11:43:56 -- common/autotest_common.sh@10 -- # set +x 00:06:02.837 11:43:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:02.837 11:43:56 -- common/autotest_common.sh@852 -- # return 0 00:06:02.837 11:43:56 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.097 Malloc0 00:06:03.097 11:43:56 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.097 Malloc1 00:06:03.097 11:43:56 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.097 11:43:56 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.097 11:43:56 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.097 11:43:56 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:03.097 11:43:56 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.097 11:43:56 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:03.097 11:43:56 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.097 11:43:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.097 11:43:56 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.097 11:43:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:03.097 11:43:56 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.097 11:43:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:03.097 11:43:56 -- bdev/nbd_common.sh@12 -- # local i 00:06:03.097 11:43:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:03.097 11:43:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.097 11:43:56 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:03.358 /dev/nbd0 00:06:03.358 11:43:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:03.358 11:43:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:03.358 11:43:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:03.358 11:43:56 -- common/autotest_common.sh@857 -- # local i 00:06:03.358 11:43:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:03.358 11:43:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:03.358 11:43:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:03.358 11:43:56 -- common/autotest_common.sh@861 -- # break 00:06:03.358 11:43:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:03.358 11:43:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:03.358 11:43:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.358 1+0 records in 00:06:03.358 1+0 records out 00:06:03.358 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196622 s, 20.8 MB/s 00:06:03.358 11:43:56 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.358 11:43:56 -- common/autotest_common.sh@874 -- # size=4096 00:06:03.358 11:43:56 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.358 11:43:56 -- common/autotest_common.sh@876 -- # 
'[' 4096 '!=' 0 ']' 00:06:03.358 11:43:56 -- common/autotest_common.sh@877 -- # return 0 00:06:03.358 11:43:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.358 11:43:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.358 11:43:56 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:03.358 /dev/nbd1 00:06:03.619 11:43:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:03.619 11:43:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:03.619 11:43:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:03.619 11:43:57 -- common/autotest_common.sh@857 -- # local i 00:06:03.619 11:43:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:03.619 11:43:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:03.619 11:43:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:03.619 11:43:57 -- common/autotest_common.sh@861 -- # break 00:06:03.619 11:43:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:03.619 11:43:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:03.619 11:43:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.619 1+0 records in 00:06:03.619 1+0 records out 00:06:03.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272017 s, 15.1 MB/s 00:06:03.619 11:43:57 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.619 11:43:57 -- common/autotest_common.sh@874 -- # size=4096 00:06:03.619 11:43:57 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.619 11:43:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:03.619 11:43:57 -- common/autotest_common.sh@877 -- # return 0 00:06:03.619 11:43:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.619 11:43:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.619 11:43:57 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.619 11:43:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.619 11:43:57 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.619 11:43:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:03.619 { 00:06:03.619 "nbd_device": "/dev/nbd0", 00:06:03.619 "bdev_name": "Malloc0" 00:06:03.619 }, 00:06:03.619 { 00:06:03.619 "nbd_device": "/dev/nbd1", 00:06:03.619 "bdev_name": "Malloc1" 00:06:03.619 } 00:06:03.619 ]' 00:06:03.619 11:43:57 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:03.619 { 00:06:03.619 "nbd_device": "/dev/nbd0", 00:06:03.619 "bdev_name": "Malloc0" 00:06:03.619 }, 00:06:03.619 { 00:06:03.619 "nbd_device": "/dev/nbd1", 00:06:03.619 "bdev_name": "Malloc1" 00:06:03.619 } 00:06:03.619 ]' 00:06:03.619 11:43:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.619 11:43:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:03.619 /dev/nbd1' 00:06:03.619 11:43:57 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:03.619 /dev/nbd1' 00:06:03.619 11:43:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.619 11:43:57 -- bdev/nbd_common.sh@65 -- # count=2 00:06:03.619 11:43:57 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:03.619 11:43:57 -- bdev/nbd_common.sh@95 -- # count=2 00:06:03.620 11:43:57 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:03.620 11:43:57 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:03.620 11:43:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.620 11:43:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.620 11:43:57 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:03.620 11:43:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.620 11:43:57 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:03.620 11:43:57 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:03.620 256+0 records in 00:06:03.620 256+0 records out 00:06:03.620 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125 s, 83.9 MB/s 00:06:03.620 11:43:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.620 11:43:57 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:03.881 256+0 records in 00:06:03.881 256+0 records out 00:06:03.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159888 s, 65.6 MB/s 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:03.881 256+0 records in 00:06:03.881 256+0 records out 00:06:03.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169766 s, 61.8 MB/s 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@51 -- # local i 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.881 11:43:57 -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@41 -- # break 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.881 11:43:57 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:04.141 11:43:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:04.142 11:43:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:04.142 11:43:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:04.142 11:43:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.142 11:43:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.142 11:43:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:04.142 11:43:57 -- bdev/nbd_common.sh@41 -- # break 00:06:04.142 11:43:57 -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.142 11:43:57 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.142 11:43:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.142 11:43:57 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.402 11:43:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.402 11:43:57 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.402 11:43:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.402 11:43:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.402 11:43:57 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.402 11:43:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.402 11:43:57 -- bdev/nbd_common.sh@65 -- # true 00:06:04.402 11:43:57 -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.402 11:43:57 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.402 11:43:57 -- bdev/nbd_common.sh@104 -- # count=0 00:06:04.402 11:43:57 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:04.402 11:43:57 -- bdev/nbd_common.sh@109 -- # return 0 00:06:04.402 11:43:57 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:04.402 11:43:58 -- event/event.sh@35 -- # sleep 3 00:06:04.663 [2024-06-10 11:43:58.262913] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.663 [2024-06-10 11:43:58.323983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.663 [2024-06-10 11:43:58.323985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.663 [2024-06-10 11:43:58.355393] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:04.663 [2024-06-10 11:43:58.355426] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
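Editor's note: each app_repeat round above follows the same shape. The following is a condensed, illustrative sketch of the loop event.sh drives, with helper internals elided; the helper bodies and the exact restart handling are assumptions, not the literal script.

    rpc_server=/var/tmp/spdk-nbd.sock
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" "$rpc_server"            # app is back up and listening
        # two 64 MB malloc bdevs (4096-byte blocks) back the NBD devices each round
        rpc.py -s "$rpc_server" bdev_malloc_create 64 4096   # Malloc0
        rpc.py -s "$rpc_server" bdev_malloc_create 64 4096   # Malloc1
        nbd_rpc_data_verify "$rpc_server" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        # ask the app to exit so the next round exercises a fresh start-up
        rpc.py -s "$rpc_server" spdk_kill_instance SIGTERM
        sleep 3
    done
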
00:06:07.964 11:44:01 -- event/event.sh@38 -- # waitforlisten 1733996 /var/tmp/spdk-nbd.sock 00:06:07.964 11:44:01 -- common/autotest_common.sh@819 -- # '[' -z 1733996 ']' 00:06:07.964 11:44:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.964 11:44:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:07.964 11:44:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:07.964 11:44:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:07.964 11:44:01 -- common/autotest_common.sh@10 -- # set +x 00:06:07.964 11:44:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:07.964 11:44:01 -- common/autotest_common.sh@852 -- # return 0 00:06:07.964 11:44:01 -- event/event.sh@39 -- # killprocess 1733996 00:06:07.964 11:44:01 -- common/autotest_common.sh@926 -- # '[' -z 1733996 ']' 00:06:07.964 11:44:01 -- common/autotest_common.sh@930 -- # kill -0 1733996 00:06:07.964 11:44:01 -- common/autotest_common.sh@931 -- # uname 00:06:07.964 11:44:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:07.964 11:44:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1733996 00:06:07.964 11:44:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:07.964 11:44:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:07.964 11:44:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1733996' 00:06:07.964 killing process with pid 1733996 00:06:07.964 11:44:01 -- common/autotest_common.sh@945 -- # kill 1733996 00:06:07.964 11:44:01 -- common/autotest_common.sh@950 -- # wait 1733996 00:06:07.964 spdk_app_start is called in Round 0. 00:06:07.964 Shutdown signal received, stop current app iteration 00:06:07.964 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:06:07.964 spdk_app_start is called in Round 1. 00:06:07.964 Shutdown signal received, stop current app iteration 00:06:07.964 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:06:07.964 spdk_app_start is called in Round 2. 00:06:07.964 Shutdown signal received, stop current app iteration 00:06:07.964 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:06:07.964 spdk_app_start is called in Round 3. 
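The killprocess helper traced above follows a simple liveness-check/kill/wait pattern; a simplified sketch (the real autotest_common.sh also special-cases sudo-owned processes and non-Linux hosts):

  killprocess() {
      local pid=$1
      [[ -n "$pid" ]] || return 1
      kill -0 "$pid" 2>/dev/null || return 1            # still alive?
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK target
      echo "killing process with pid $pid ($process_name)"
      kill "$pid"
      wait "$pid"    # reaps the child and propagates its exit status
  }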
00:06:07.964 Shutdown signal received, stop current app iteration 00:06:07.964 11:44:01 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:07.964 11:44:01 -- event/event.sh@42 -- # return 0 00:06:07.964 00:06:07.964 real 0m15.326s 00:06:07.964 user 0m33.014s 00:06:07.964 sys 0m2.054s 00:06:07.964 11:44:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.964 11:44:01 -- common/autotest_common.sh@10 -- # set +x 00:06:07.964 ************************************ 00:06:07.964 END TEST app_repeat 00:06:07.964 ************************************ 00:06:07.964 11:44:01 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:07.964 11:44:01 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:07.964 11:44:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:07.964 11:44:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:07.964 11:44:01 -- common/autotest_common.sh@10 -- # set +x 00:06:07.964 ************************************ 00:06:07.964 START TEST cpu_locks 00:06:07.964 ************************************ 00:06:07.964 11:44:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:07.965 * Looking for test storage... 00:06:07.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:07.965 11:44:01 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:07.965 11:44:01 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:07.965 11:44:01 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:07.965 11:44:01 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:07.965 11:44:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:07.965 11:44:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:07.965 11:44:01 -- common/autotest_common.sh@10 -- # set +x 00:06:07.965 ************************************ 00:06:07.965 START TEST default_locks 00:06:07.965 ************************************ 00:06:07.965 11:44:01 -- common/autotest_common.sh@1104 -- # default_locks 00:06:07.965 11:44:01 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1737282 00:06:07.965 11:44:01 -- event/cpu_locks.sh@47 -- # waitforlisten 1737282 00:06:07.965 11:44:01 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.965 11:44:01 -- common/autotest_common.sh@819 -- # '[' -z 1737282 ']' 00:06:07.965 11:44:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.965 11:44:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:07.965 11:44:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.965 11:44:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:07.965 11:44:01 -- common/autotest_common.sh@10 -- # set +x 00:06:07.965 [2024-06-10 11:44:01.651338] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:07.965 [2024-06-10 11:44:01.651394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737282 ] 00:06:07.965 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.965 [2024-06-10 11:44:01.713850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.225 [2024-06-10 11:44:01.776809] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:08.225 [2024-06-10 11:44:01.776946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.795 11:44:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:08.795 11:44:02 -- common/autotest_common.sh@852 -- # return 0 00:06:08.795 11:44:02 -- event/cpu_locks.sh@49 -- # locks_exist 1737282 00:06:08.795 11:44:02 -- event/cpu_locks.sh@22 -- # lslocks -p 1737282 00:06:08.795 11:44:02 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.367 lslocks: write error 00:06:09.367 11:44:02 -- event/cpu_locks.sh@50 -- # killprocess 1737282 00:06:09.367 11:44:02 -- common/autotest_common.sh@926 -- # '[' -z 1737282 ']' 00:06:09.367 11:44:02 -- common/autotest_common.sh@930 -- # kill -0 1737282 00:06:09.367 11:44:02 -- common/autotest_common.sh@931 -- # uname 00:06:09.367 11:44:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:09.367 11:44:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1737282 00:06:09.367 11:44:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:09.367 11:44:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:09.367 11:44:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1737282' 00:06:09.367 killing process with pid 1737282 00:06:09.367 11:44:02 -- common/autotest_common.sh@945 -- # kill 1737282 00:06:09.367 11:44:02 -- common/autotest_common.sh@950 -- # wait 1737282 00:06:09.367 11:44:03 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1737282 00:06:09.367 11:44:03 -- common/autotest_common.sh@640 -- # local es=0 00:06:09.367 11:44:03 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 1737282 00:06:09.367 11:44:03 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:09.367 11:44:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:09.367 11:44:03 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:09.367 11:44:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:09.367 11:44:03 -- common/autotest_common.sh@643 -- # waitforlisten 1737282 00:06:09.367 11:44:03 -- common/autotest_common.sh@819 -- # '[' -z 1737282 ']' 00:06:09.367 11:44:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.367 11:44:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:09.367 11:44:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
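The locks_exist check driving the default_locks test above relies on the per-core lock files SPDK creates under /var/tmp; the "lslocks: write error" message in the log is just the broken pipe caused by grep -q exiting early, not a test failure. A sketch of the check:

  # spdk_tgt -m 0x1 holds a POSIX lock on /var/tmp/spdk_cpu_lock_000 for core 0;
  # lslocks lists it for the target's pid.
  locks_exist() {
      local pid=$1
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }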
00:06:09.367 11:44:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:09.367 11:44:03 -- common/autotest_common.sh@10 -- # set +x 00:06:09.367 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (1737282) - No such process 00:06:09.367 ERROR: process (pid: 1737282) is no longer running 00:06:09.367 11:44:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:09.367 11:44:03 -- common/autotest_common.sh@852 -- # return 1 00:06:09.367 11:44:03 -- common/autotest_common.sh@643 -- # es=1 00:06:09.367 11:44:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:09.367 11:44:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:09.367 11:44:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:09.368 11:44:03 -- event/cpu_locks.sh@54 -- # no_locks 00:06:09.368 11:44:03 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:09.368 11:44:03 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:09.368 11:44:03 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:09.368 00:06:09.368 real 0m1.535s 00:06:09.368 user 0m1.633s 00:06:09.368 sys 0m0.503s 00:06:09.368 11:44:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.368 11:44:03 -- common/autotest_common.sh@10 -- # set +x 00:06:09.368 ************************************ 00:06:09.368 END TEST default_locks 00:06:09.368 ************************************ 00:06:09.629 11:44:03 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:09.629 11:44:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:09.629 11:44:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:09.629 11:44:03 -- common/autotest_common.sh@10 -- # set +x 00:06:09.629 ************************************ 00:06:09.629 START TEST default_locks_via_rpc 00:06:09.629 ************************************ 00:06:09.629 11:44:03 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:09.629 11:44:03 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1737647 00:06:09.629 11:44:03 -- event/cpu_locks.sh@63 -- # waitforlisten 1737647 00:06:09.629 11:44:03 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.629 11:44:03 -- common/autotest_common.sh@819 -- # '[' -z 1737647 ']' 00:06:09.629 11:44:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.629 11:44:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:09.629 11:44:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.629 11:44:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:09.629 11:44:03 -- common/autotest_common.sh@10 -- # set +x 00:06:09.629 [2024-06-10 11:44:03.238680] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:09.629 [2024-06-10 11:44:03.238736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737647 ] 00:06:09.629 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.629 [2024-06-10 11:44:03.299366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.629 [2024-06-10 11:44:03.363083] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:09.629 [2024-06-10 11:44:03.363218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.572 11:44:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:10.572 11:44:03 -- common/autotest_common.sh@852 -- # return 0 00:06:10.572 11:44:03 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:10.572 11:44:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:10.572 11:44:03 -- common/autotest_common.sh@10 -- # set +x 00:06:10.572 11:44:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:10.572 11:44:03 -- event/cpu_locks.sh@67 -- # no_locks 00:06:10.572 11:44:04 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:10.572 11:44:04 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:10.572 11:44:04 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:10.572 11:44:04 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:10.572 11:44:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:10.572 11:44:04 -- common/autotest_common.sh@10 -- # set +x 00:06:10.572 11:44:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:10.572 11:44:04 -- event/cpu_locks.sh@71 -- # locks_exist 1737647 00:06:10.572 11:44:04 -- event/cpu_locks.sh@22 -- # lslocks -p 1737647 00:06:10.572 11:44:04 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.832 11:44:04 -- event/cpu_locks.sh@73 -- # killprocess 1737647 00:06:10.832 11:44:04 -- common/autotest_common.sh@926 -- # '[' -z 1737647 ']' 00:06:10.832 11:44:04 -- common/autotest_common.sh@930 -- # kill -0 1737647 00:06:10.832 11:44:04 -- common/autotest_common.sh@931 -- # uname 00:06:10.832 11:44:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:10.832 11:44:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1737647 00:06:10.832 11:44:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:10.832 11:44:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:10.832 11:44:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1737647' 00:06:10.832 killing process with pid 1737647 00:06:10.832 11:44:04 -- common/autotest_common.sh@945 -- # kill 1737647 00:06:10.832 11:44:04 -- common/autotest_common.sh@950 -- # wait 1737647 00:06:11.093 00:06:11.093 real 0m1.491s 00:06:11.093 user 0m1.584s 00:06:11.093 sys 0m0.484s 00:06:11.093 11:44:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.093 11:44:04 -- common/autotest_common.sh@10 -- # set +x 00:06:11.093 ************************************ 00:06:11.093 END TEST default_locks_via_rpc 00:06:11.093 ************************************ 00:06:11.093 11:44:04 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:11.093 11:44:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:11.093 11:44:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:11.093 11:44:04 -- 
common/autotest_common.sh@10 -- # set +x 00:06:11.093 ************************************ 00:06:11.093 START TEST non_locking_app_on_locked_coremask 00:06:11.093 ************************************ 00:06:11.093 11:44:04 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:11.093 11:44:04 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1738018 00:06:11.093 11:44:04 -- event/cpu_locks.sh@81 -- # waitforlisten 1738018 /var/tmp/spdk.sock 00:06:11.093 11:44:04 -- common/autotest_common.sh@819 -- # '[' -z 1738018 ']' 00:06:11.093 11:44:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.093 11:44:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:11.093 11:44:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.093 11:44:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:11.093 11:44:04 -- common/autotest_common.sh@10 -- # set +x 00:06:11.093 11:44:04 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.093 [2024-06-10 11:44:04.756001] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:11.093 [2024-06-10 11:44:04.756060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1738018 ] 00:06:11.093 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.093 [2024-06-10 11:44:04.815539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.374 [2024-06-10 11:44:04.879842] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:11.374 [2024-06-10 11:44:04.879966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.946 11:44:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:11.946 11:44:05 -- common/autotest_common.sh@852 -- # return 0 00:06:11.946 11:44:05 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1738116 00:06:11.946 11:44:05 -- event/cpu_locks.sh@85 -- # waitforlisten 1738116 /var/tmp/spdk2.sock 00:06:11.946 11:44:05 -- common/autotest_common.sh@819 -- # '[' -z 1738116 ']' 00:06:11.946 11:44:05 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:11.946 11:44:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.946 11:44:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:11.946 11:44:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.946 11:44:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:11.946 11:44:05 -- common/autotest_common.sh@10 -- # set +x 00:06:11.946 [2024-06-10 11:44:05.555932] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:11.946 [2024-06-10 11:44:05.555985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1738116 ] 00:06:11.946 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.946 [2024-06-10 11:44:05.644644] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:11.946 [2024-06-10 11:44:05.644671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.211 [2024-06-10 11:44:05.772275] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:12.211 [2024-06-10 11:44:05.772400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.783 11:44:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:12.783 11:44:06 -- common/autotest_common.sh@852 -- # return 0 00:06:12.783 11:44:06 -- event/cpu_locks.sh@87 -- # locks_exist 1738018 00:06:12.783 11:44:06 -- event/cpu_locks.sh@22 -- # lslocks -p 1738018 00:06:12.783 11:44:06 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.354 lslocks: write error 00:06:13.354 11:44:06 -- event/cpu_locks.sh@89 -- # killprocess 1738018 00:06:13.354 11:44:06 -- common/autotest_common.sh@926 -- # '[' -z 1738018 ']' 00:06:13.354 11:44:06 -- common/autotest_common.sh@930 -- # kill -0 1738018 00:06:13.354 11:44:06 -- common/autotest_common.sh@931 -- # uname 00:06:13.354 11:44:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:13.354 11:44:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1738018 00:06:13.354 11:44:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:13.354 11:44:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:13.354 11:44:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1738018' 00:06:13.354 killing process with pid 1738018 00:06:13.354 11:44:06 -- common/autotest_common.sh@945 -- # kill 1738018 00:06:13.354 11:44:06 -- common/autotest_common.sh@950 -- # wait 1738018 00:06:13.614 11:44:07 -- event/cpu_locks.sh@90 -- # killprocess 1738116 00:06:13.614 11:44:07 -- common/autotest_common.sh@926 -- # '[' -z 1738116 ']' 00:06:13.614 11:44:07 -- common/autotest_common.sh@930 -- # kill -0 1738116 00:06:13.614 11:44:07 -- common/autotest_common.sh@931 -- # uname 00:06:13.614 11:44:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:13.614 11:44:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1738116 00:06:13.614 11:44:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:13.614 11:44:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:13.614 11:44:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1738116' 00:06:13.614 killing process with pid 1738116 00:06:13.614 11:44:07 -- common/autotest_common.sh@945 -- # kill 1738116 00:06:13.614 11:44:07 -- common/autotest_common.sh@950 -- # wait 1738116 00:06:13.874 00:06:13.874 real 0m2.871s 00:06:13.874 user 0m3.120s 00:06:13.874 sys 0m0.868s 00:06:13.874 11:44:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.874 11:44:07 -- common/autotest_common.sh@10 -- # set +x 00:06:13.874 ************************************ 00:06:13.874 END TEST non_locking_app_on_locked_coremask 00:06:13.874 ************************************ 00:06:13.874 11:44:07 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:06:13.874 11:44:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:13.874 11:44:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.874 11:44:07 -- common/autotest_common.sh@10 -- # set +x 00:06:13.874 ************************************ 00:06:13.874 START TEST locking_app_on_unlocked_coremask 00:06:13.874 ************************************ 00:06:13.874 11:44:07 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:13.874 11:44:07 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1738729 00:06:13.874 11:44:07 -- event/cpu_locks.sh@99 -- # waitforlisten 1738729 /var/tmp/spdk.sock 00:06:13.874 11:44:07 -- common/autotest_common.sh@819 -- # '[' -z 1738729 ']' 00:06:13.874 11:44:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.874 11:44:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:13.874 11:44:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.874 11:44:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:13.874 11:44:07 -- common/autotest_common.sh@10 -- # set +x 00:06:13.875 11:44:07 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:14.135 [2024-06-10 11:44:07.666740] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:14.135 [2024-06-10 11:44:07.666798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1738729 ] 00:06:14.135 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.135 [2024-06-10 11:44:07.725233] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:14.135 [2024-06-10 11:44:07.725278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.135 [2024-06-10 11:44:07.787825] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:14.135 [2024-06-10 11:44:07.787948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.706 11:44:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:14.706 11:44:08 -- common/autotest_common.sh@852 -- # return 0 00:06:14.706 11:44:08 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1738742 00:06:14.706 11:44:08 -- event/cpu_locks.sh@103 -- # waitforlisten 1738742 /var/tmp/spdk2.sock 00:06:14.706 11:44:08 -- common/autotest_common.sh@819 -- # '[' -z 1738742 ']' 00:06:14.706 11:44:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.706 11:44:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:14.706 11:44:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
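What this test exercises: a first target started with --disable-cpumask-locks leaves core 0 unclaimed, so a second target can come up on the same mask and take the lock itself. A condensed sketch of the two launches (SPDK_BIN stands in for the build/bin path; the waitforlisten calls are omitted):

  "$SPDK_BIN"/spdk_tgt -m 0x1 --disable-cpumask-locks &    # app 1: core 0, no lock taken
  pid1=$!
  "$SPDK_BIN"/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &     # app 2: same core, separate RPC socket
  pid2=$!
  # Only the second instance shows up in lslocks:
  lslocks -p "$pid2" | grep spdk_cpu_lock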
00:06:14.706 11:44:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:14.706 11:44:08 -- common/autotest_common.sh@10 -- # set +x 00:06:14.706 11:44:08 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:14.706 [2024-06-10 11:44:08.460791] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:14.706 [2024-06-10 11:44:08.460841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1738742 ] 00:06:14.966 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.966 [2024-06-10 11:44:08.548440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.966 [2024-06-10 11:44:08.675029] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:14.966 [2024-06-10 11:44:08.675155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.537 11:44:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:15.537 11:44:09 -- common/autotest_common.sh@852 -- # return 0 00:06:15.537 11:44:09 -- event/cpu_locks.sh@105 -- # locks_exist 1738742 00:06:15.537 11:44:09 -- event/cpu_locks.sh@22 -- # lslocks -p 1738742 00:06:15.537 11:44:09 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:16.108 lslocks: write error 00:06:16.108 11:44:09 -- event/cpu_locks.sh@107 -- # killprocess 1738729 00:06:16.108 11:44:09 -- common/autotest_common.sh@926 -- # '[' -z 1738729 ']' 00:06:16.108 11:44:09 -- common/autotest_common.sh@930 -- # kill -0 1738729 00:06:16.108 11:44:09 -- common/autotest_common.sh@931 -- # uname 00:06:16.108 11:44:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:16.108 11:44:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1738729 00:06:16.108 11:44:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:16.108 11:44:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:16.108 11:44:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1738729' 00:06:16.108 killing process with pid 1738729 00:06:16.108 11:44:09 -- common/autotest_common.sh@945 -- # kill 1738729 00:06:16.108 11:44:09 -- common/autotest_common.sh@950 -- # wait 1738729 00:06:16.677 11:44:10 -- event/cpu_locks.sh@108 -- # killprocess 1738742 00:06:16.677 11:44:10 -- common/autotest_common.sh@926 -- # '[' -z 1738742 ']' 00:06:16.677 11:44:10 -- common/autotest_common.sh@930 -- # kill -0 1738742 00:06:16.677 11:44:10 -- common/autotest_common.sh@931 -- # uname 00:06:16.677 11:44:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:16.677 11:44:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1738742 00:06:16.677 11:44:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:16.677 11:44:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:16.677 11:44:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1738742' 00:06:16.677 killing process with pid 1738742 00:06:16.677 11:44:10 -- common/autotest_common.sh@945 -- # kill 1738742 00:06:16.677 11:44:10 -- common/autotest_common.sh@950 -- # wait 1738742 00:06:16.939 00:06:16.939 real 0m2.914s 00:06:16.939 user 0m3.157s 00:06:16.939 sys 0m0.874s 00:06:16.939 11:44:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.939 11:44:10 -- 
common/autotest_common.sh@10 -- # set +x 00:06:16.939 ************************************ 00:06:16.939 END TEST locking_app_on_unlocked_coremask 00:06:16.939 ************************************ 00:06:16.939 11:44:10 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:16.939 11:44:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:16.939 11:44:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:16.939 11:44:10 -- common/autotest_common.sh@10 -- # set +x 00:06:16.939 ************************************ 00:06:16.939 START TEST locking_app_on_locked_coremask 00:06:16.939 ************************************ 00:06:16.939 11:44:10 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:16.939 11:44:10 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1739235 00:06:16.939 11:44:10 -- event/cpu_locks.sh@116 -- # waitforlisten 1739235 /var/tmp/spdk.sock 00:06:16.939 11:44:10 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.939 11:44:10 -- common/autotest_common.sh@819 -- # '[' -z 1739235 ']' 00:06:16.939 11:44:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.939 11:44:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:16.939 11:44:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.939 11:44:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:16.939 11:44:10 -- common/autotest_common.sh@10 -- # set +x 00:06:16.939 [2024-06-10 11:44:10.635510] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:16.939 [2024-06-10 11:44:10.635575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1739235 ] 00:06:16.939 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.939 [2024-06-10 11:44:10.699350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.200 [2024-06-10 11:44:10.769910] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:17.200 [2024-06-10 11:44:10.770059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.771 11:44:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:17.771 11:44:11 -- common/autotest_common.sh@852 -- # return 0 00:06:17.771 11:44:11 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1739458 00:06:17.771 11:44:11 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1739458 /var/tmp/spdk2.sock 00:06:17.771 11:44:11 -- common/autotest_common.sh@640 -- # local es=0 00:06:17.771 11:44:11 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:17.771 11:44:11 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 1739458 /var/tmp/spdk2.sock 00:06:17.771 11:44:11 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:17.771 11:44:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:17.771 11:44:11 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:17.771 11:44:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:17.771 11:44:11 -- common/autotest_common.sh@643 -- # waitforlisten 1739458 /var/tmp/spdk2.sock 00:06:17.771 11:44:11 -- common/autotest_common.sh@819 -- # '[' -z 1739458 ']' 00:06:17.771 11:44:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.771 11:44:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:17.771 11:44:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.771 11:44:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:17.771 11:44:11 -- common/autotest_common.sh@10 -- # set +x 00:06:17.771 [2024-06-10 11:44:11.447362] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:17.771 [2024-06-10 11:44:11.447414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1739458 ] 00:06:17.771 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.771 [2024-06-10 11:44:11.537880] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1739235 has claimed it. 00:06:17.771 [2024-06-10 11:44:11.537923] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
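The expected-failure path above (the second target refused core 0) is driven by the NOT helper, which inverts a command's exit status so the test only passes when the command really fails. A reduced sketch (the full helper also validates its argument with type -t, as the trace shows):

  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))      # success is the unexpected outcome here
  }
  # Usage in this test, roughly:
  #   NOT waitforlisten "$pid2" /var/tmp/spdk2.sock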
00:06:18.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (1739458) - No such process 00:06:18.342 ERROR: process (pid: 1739458) is no longer running 00:06:18.342 11:44:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:18.342 11:44:12 -- common/autotest_common.sh@852 -- # return 1 00:06:18.342 11:44:12 -- common/autotest_common.sh@643 -- # es=1 00:06:18.342 11:44:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:18.342 11:44:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:18.342 11:44:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:18.342 11:44:12 -- event/cpu_locks.sh@122 -- # locks_exist 1739235 00:06:18.342 11:44:12 -- event/cpu_locks.sh@22 -- # lslocks -p 1739235 00:06:18.342 11:44:12 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.913 lslocks: write error 00:06:18.913 11:44:12 -- event/cpu_locks.sh@124 -- # killprocess 1739235 00:06:18.913 11:44:12 -- common/autotest_common.sh@926 -- # '[' -z 1739235 ']' 00:06:18.913 11:44:12 -- common/autotest_common.sh@930 -- # kill -0 1739235 00:06:18.913 11:44:12 -- common/autotest_common.sh@931 -- # uname 00:06:18.913 11:44:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:18.913 11:44:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1739235 00:06:18.913 11:44:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:18.913 11:44:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:18.913 11:44:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1739235' 00:06:18.913 killing process with pid 1739235 00:06:18.913 11:44:12 -- common/autotest_common.sh@945 -- # kill 1739235 00:06:18.913 11:44:12 -- common/autotest_common.sh@950 -- # wait 1739235 00:06:19.174 00:06:19.174 real 0m2.126s 00:06:19.174 user 0m2.359s 00:06:19.174 sys 0m0.578s 00:06:19.174 11:44:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.174 11:44:12 -- common/autotest_common.sh@10 -- # set +x 00:06:19.174 ************************************ 00:06:19.174 END TEST locking_app_on_locked_coremask 00:06:19.174 ************************************ 00:06:19.174 11:44:12 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:19.174 11:44:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:19.174 11:44:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:19.174 11:44:12 -- common/autotest_common.sh@10 -- # set +x 00:06:19.174 ************************************ 00:06:19.174 START TEST locking_overlapped_coremask 00:06:19.174 ************************************ 00:06:19.174 11:44:12 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:19.174 11:44:12 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1739820 00:06:19.174 11:44:12 -- event/cpu_locks.sh@133 -- # waitforlisten 1739820 /var/tmp/spdk.sock 00:06:19.174 11:44:12 -- common/autotest_common.sh@819 -- # '[' -z 1739820 ']' 00:06:19.174 11:44:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.174 11:44:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:19.174 11:44:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
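Why the next launch is expected to die: the two masks used by locking_overlapped_coremask share exactly one core, as a quick bit of mask arithmetic shows.

  #   0x07 = 0b00111 -> cores 0,1,2   (first spdk_tgt, takes the locks)
  #   0x1c = 0b11100 -> cores 2,3,4   (second spdk_tgt, started just below)
  printf '0x%x\n' $(( 0x07 & 0x1c ))   # prints 0x4, i.e. the bit for core 2
  # hence "Cannot create lock on core 2, probably process <pid> has claimed it."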
00:06:19.174 11:44:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:19.174 11:44:12 -- common/autotest_common.sh@10 -- # set +x 00:06:19.174 11:44:12 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:19.174 [2024-06-10 11:44:12.792754] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:19.174 [2024-06-10 11:44:12.792810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1739820 ] 00:06:19.174 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.174 [2024-06-10 11:44:12.852342] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:19.174 [2024-06-10 11:44:12.917395] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:19.174 [2024-06-10 11:44:12.917657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.174 [2024-06-10 11:44:12.917772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.174 [2024-06-10 11:44:12.917775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.119 11:44:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:20.119 11:44:13 -- common/autotest_common.sh@852 -- # return 0 00:06:20.119 11:44:13 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1739837 00:06:20.119 11:44:13 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1739837 /var/tmp/spdk2.sock 00:06:20.119 11:44:13 -- common/autotest_common.sh@640 -- # local es=0 00:06:20.119 11:44:13 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:20.119 11:44:13 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 1739837 /var/tmp/spdk2.sock 00:06:20.119 11:44:13 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:20.119 11:44:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:20.119 11:44:13 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:20.119 11:44:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:20.120 11:44:13 -- common/autotest_common.sh@643 -- # waitforlisten 1739837 /var/tmp/spdk2.sock 00:06:20.120 11:44:13 -- common/autotest_common.sh@819 -- # '[' -z 1739837 ']' 00:06:20.120 11:44:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.120 11:44:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:20.120 11:44:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.120 11:44:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:20.120 11:44:13 -- common/autotest_common.sh@10 -- # set +x 00:06:20.120 [2024-06-10 11:44:13.603321] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:20.120 [2024-06-10 11:44:13.603373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1739837 ] 00:06:20.120 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.120 [2024-06-10 11:44:13.674974] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1739820 has claimed it. 00:06:20.120 [2024-06-10 11:44:13.675003] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:20.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (1739837) - No such process 00:06:20.690 ERROR: process (pid: 1739837) is no longer running 00:06:20.690 11:44:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:20.690 11:44:14 -- common/autotest_common.sh@852 -- # return 1 00:06:20.690 11:44:14 -- common/autotest_common.sh@643 -- # es=1 00:06:20.690 11:44:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:20.690 11:44:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:20.690 11:44:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:20.690 11:44:14 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:20.690 11:44:14 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:20.690 11:44:14 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:20.690 11:44:14 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:20.690 11:44:14 -- event/cpu_locks.sh@141 -- # killprocess 1739820 00:06:20.690 11:44:14 -- common/autotest_common.sh@926 -- # '[' -z 1739820 ']' 00:06:20.690 11:44:14 -- common/autotest_common.sh@930 -- # kill -0 1739820 00:06:20.690 11:44:14 -- common/autotest_common.sh@931 -- # uname 00:06:20.690 11:44:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:20.690 11:44:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1739820 00:06:20.690 11:44:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:20.690 11:44:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:20.690 11:44:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1739820' 00:06:20.690 killing process with pid 1739820 00:06:20.690 11:44:14 -- common/autotest_common.sh@945 -- # kill 1739820 00:06:20.690 11:44:14 -- common/autotest_common.sh@950 -- # wait 1739820 00:06:20.950 00:06:20.950 real 0m1.730s 00:06:20.950 user 0m4.921s 00:06:20.950 sys 0m0.348s 00:06:20.950 11:44:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.950 11:44:14 -- common/autotest_common.sh@10 -- # set +x 00:06:20.950 ************************************ 00:06:20.950 END TEST locking_overlapped_coremask 00:06:20.950 ************************************ 00:06:20.950 11:44:14 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:20.950 11:44:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:20.950 11:44:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:20.950 11:44:14 -- common/autotest_common.sh@10 -- # set +x 00:06:20.950 ************************************ 00:06:20.950 
START TEST locking_overlapped_coremask_via_rpc 00:06:20.950 ************************************ 00:06:20.950 11:44:14 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:20.950 11:44:14 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1740199 00:06:20.950 11:44:14 -- event/cpu_locks.sh@149 -- # waitforlisten 1740199 /var/tmp/spdk.sock 00:06:20.950 11:44:14 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:20.950 11:44:14 -- common/autotest_common.sh@819 -- # '[' -z 1740199 ']' 00:06:20.950 11:44:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.950 11:44:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:20.951 11:44:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.951 11:44:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:20.951 11:44:14 -- common/autotest_common.sh@10 -- # set +x 00:06:20.951 [2024-06-10 11:44:14.572191] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:20.951 [2024-06-10 11:44:14.572287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1740199 ] 00:06:20.951 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.951 [2024-06-10 11:44:14.636990] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:20.951 [2024-06-10 11:44:14.637023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.951 [2024-06-10 11:44:14.699128] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:20.951 [2024-06-10 11:44:14.699377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.951 [2024-06-10 11:44:14.699573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.951 [2024-06-10 11:44:14.699577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.892 11:44:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:21.892 11:44:15 -- common/autotest_common.sh@852 -- # return 0 00:06:21.892 11:44:15 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1740213 00:06:21.892 11:44:15 -- event/cpu_locks.sh@153 -- # waitforlisten 1740213 /var/tmp/spdk2.sock 00:06:21.892 11:44:15 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:21.892 11:44:15 -- common/autotest_common.sh@819 -- # '[' -z 1740213 ']' 00:06:21.892 11:44:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.892 11:44:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:21.892 11:44:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
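In this via_rpc variant both targets start with --disable-cpumask-locks, so the overlapping masks are accepted at startup; the locks are only requested afterwards over JSON-RPC. A sketch of that second phase (rpc.py as shipped under scripts/ in the SPDK tree):

  scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # first app claims cores 0-2
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # expected to fail: core 2 already locked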
00:06:21.892 11:44:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:21.892 11:44:15 -- common/autotest_common.sh@10 -- # set +x 00:06:21.892 [2024-06-10 11:44:15.389342] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:21.892 [2024-06-10 11:44:15.389399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1740213 ] 00:06:21.892 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.892 [2024-06-10 11:44:15.460595] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:21.892 [2024-06-10 11:44:15.460616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.892 [2024-06-10 11:44:15.564157] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:21.892 [2024-06-10 11:44:15.564318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.892 [2024-06-10 11:44:15.568364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.892 [2024-06-10 11:44:15.568366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:22.464 11:44:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:22.464 11:44:16 -- common/autotest_common.sh@852 -- # return 0 00:06:22.464 11:44:16 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:22.464 11:44:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:22.464 11:44:16 -- common/autotest_common.sh@10 -- # set +x 00:06:22.464 11:44:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:22.464 11:44:16 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:22.464 11:44:16 -- common/autotest_common.sh@640 -- # local es=0 00:06:22.464 11:44:16 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:22.464 11:44:16 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:22.464 11:44:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:22.464 11:44:16 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:22.464 11:44:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:22.464 11:44:16 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:22.464 11:44:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:22.464 11:44:16 -- common/autotest_common.sh@10 -- # set +x 00:06:22.465 [2024-06-10 11:44:16.147303] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1740199 has claimed it. 
00:06:22.465 request: 00:06:22.465 { 00:06:22.465 "method": "framework_enable_cpumask_locks", 00:06:22.465 "req_id": 1 00:06:22.465 } 00:06:22.465 Got JSON-RPC error response 00:06:22.465 response: 00:06:22.465 { 00:06:22.465 "code": -32603, 00:06:22.465 "message": "Failed to claim CPU core: 2" 00:06:22.465 } 00:06:22.465 11:44:16 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:22.465 11:44:16 -- common/autotest_common.sh@643 -- # es=1 00:06:22.465 11:44:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:22.465 11:44:16 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:22.465 11:44:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:22.465 11:44:16 -- event/cpu_locks.sh@158 -- # waitforlisten 1740199 /var/tmp/spdk.sock 00:06:22.465 11:44:16 -- common/autotest_common.sh@819 -- # '[' -z 1740199 ']' 00:06:22.465 11:44:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.465 11:44:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:22.465 11:44:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.465 11:44:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:22.465 11:44:16 -- common/autotest_common.sh@10 -- # set +x 00:06:22.725 11:44:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:22.725 11:44:16 -- common/autotest_common.sh@852 -- # return 0 00:06:22.725 11:44:16 -- event/cpu_locks.sh@159 -- # waitforlisten 1740213 /var/tmp/spdk2.sock 00:06:22.725 11:44:16 -- common/autotest_common.sh@819 -- # '[' -z 1740213 ']' 00:06:22.725 11:44:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.725 11:44:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:22.725 11:44:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
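After the expected -32603 error above, the test still has to show that the first target kept its locks and that teardown leaves nothing behind; the remaining-locks check and cleanup traced around this point reduce to comparing and removing the per-core lock files. A sketch, assuming the /var/tmp/spdk_cpu_lock_NNN naming seen throughout this log:

  check_remaining_locks() {
      local locks=(/var/tmp/spdk_cpu_lock_*)
      local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 for -m 0x7
      [[ "${locks[*]}" == "${locks_expected[*]}" ]]
  }

  cleanup() {
      rm -f /var/tmp/spdk_cpu_lock*
      # killprocess on an already-exited pid just prints
      # "Process with pid <pid> is not found" and carries on.
  }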
00:06:22.725 11:44:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:22.725 11:44:16 -- common/autotest_common.sh@10 -- # set +x 00:06:22.725 11:44:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:22.725 11:44:16 -- common/autotest_common.sh@852 -- # return 0 00:06:22.725 11:44:16 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:22.725 11:44:16 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:22.725 11:44:16 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:22.725 11:44:16 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:22.725 00:06:22.725 real 0m1.956s 00:06:22.725 user 0m0.740s 00:06:22.725 sys 0m0.144s 00:06:22.725 11:44:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.725 11:44:16 -- common/autotest_common.sh@10 -- # set +x 00:06:22.725 ************************************ 00:06:22.725 END TEST locking_overlapped_coremask_via_rpc 00:06:22.725 ************************************ 00:06:22.985 11:44:16 -- event/cpu_locks.sh@174 -- # cleanup 00:06:22.985 11:44:16 -- event/cpu_locks.sh@15 -- # [[ -z 1740199 ]] 00:06:22.985 11:44:16 -- event/cpu_locks.sh@15 -- # killprocess 1740199 00:06:22.985 11:44:16 -- common/autotest_common.sh@926 -- # '[' -z 1740199 ']' 00:06:22.985 11:44:16 -- common/autotest_common.sh@930 -- # kill -0 1740199 00:06:22.985 11:44:16 -- common/autotest_common.sh@931 -- # uname 00:06:22.985 11:44:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:22.985 11:44:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1740199 00:06:22.985 11:44:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:22.985 11:44:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:22.985 11:44:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1740199' 00:06:22.985 killing process with pid 1740199 00:06:22.985 11:44:16 -- common/autotest_common.sh@945 -- # kill 1740199 00:06:22.985 11:44:16 -- common/autotest_common.sh@950 -- # wait 1740199 00:06:23.245 11:44:16 -- event/cpu_locks.sh@16 -- # [[ -z 1740213 ]] 00:06:23.245 11:44:16 -- event/cpu_locks.sh@16 -- # killprocess 1740213 00:06:23.245 11:44:16 -- common/autotest_common.sh@926 -- # '[' -z 1740213 ']' 00:06:23.245 11:44:16 -- common/autotest_common.sh@930 -- # kill -0 1740213 00:06:23.245 11:44:16 -- common/autotest_common.sh@931 -- # uname 00:06:23.245 11:44:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:23.245 11:44:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1740213 00:06:23.245 11:44:16 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:23.245 11:44:16 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:23.245 11:44:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1740213' 00:06:23.245 killing process with pid 1740213 00:06:23.245 11:44:16 -- common/autotest_common.sh@945 -- # kill 1740213 00:06:23.245 11:44:16 -- common/autotest_common.sh@950 -- # wait 1740213 00:06:23.507 11:44:17 -- event/cpu_locks.sh@18 -- # rm -f 00:06:23.507 11:44:17 -- event/cpu_locks.sh@1 -- # cleanup 00:06:23.507 11:44:17 -- event/cpu_locks.sh@15 -- # [[ -z 1740199 ]] 00:06:23.507 11:44:17 -- event/cpu_locks.sh@15 -- # killprocess 1740199 
00:06:23.507 11:44:17 -- common/autotest_common.sh@926 -- # '[' -z 1740199 ']' 00:06:23.507 11:44:17 -- common/autotest_common.sh@930 -- # kill -0 1740199 00:06:23.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1740199) - No such process 00:06:23.507 11:44:17 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1740199 is not found' 00:06:23.507 Process with pid 1740199 is not found 00:06:23.507 11:44:17 -- event/cpu_locks.sh@16 -- # [[ -z 1740213 ]] 00:06:23.507 11:44:17 -- event/cpu_locks.sh@16 -- # killprocess 1740213 00:06:23.507 11:44:17 -- common/autotest_common.sh@926 -- # '[' -z 1740213 ']' 00:06:23.507 11:44:17 -- common/autotest_common.sh@930 -- # kill -0 1740213 00:06:23.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1740213) - No such process 00:06:23.507 11:44:17 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1740213 is not found' 00:06:23.507 Process with pid 1740213 is not found 00:06:23.507 11:44:17 -- event/cpu_locks.sh@18 -- # rm -f 00:06:23.507 00:06:23.507 real 0m15.533s 00:06:23.507 user 0m26.780s 00:06:23.507 sys 0m4.548s 00:06:23.507 11:44:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.507 11:44:17 -- common/autotest_common.sh@10 -- # set +x 00:06:23.507 ************************************ 00:06:23.507 END TEST cpu_locks 00:06:23.507 ************************************ 00:06:23.507 00:06:23.507 real 0m41.309s 00:06:23.507 user 1m21.852s 00:06:23.507 sys 0m7.432s 00:06:23.507 11:44:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.507 11:44:17 -- common/autotest_common.sh@10 -- # set +x 00:06:23.507 ************************************ 00:06:23.507 END TEST event 00:06:23.507 ************************************ 00:06:23.507 11:44:17 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:23.507 11:44:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:23.507 11:44:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:23.507 11:44:17 -- common/autotest_common.sh@10 -- # set +x 00:06:23.507 ************************************ 00:06:23.507 START TEST thread 00:06:23.507 ************************************ 00:06:23.507 11:44:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:23.507 * Looking for test storage... 00:06:23.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:23.507 11:44:17 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:23.507 11:44:17 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:23.507 11:44:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:23.507 11:44:17 -- common/autotest_common.sh@10 -- # set +x 00:06:23.507 ************************************ 00:06:23.507 START TEST thread_poller_perf 00:06:23.507 ************************************ 00:06:23.507 11:44:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:23.507 [2024-06-10 11:44:17.233372] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:23.507 [2024-06-10 11:44:17.233485] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1740704 ] 00:06:23.507 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.768 [2024-06-10 11:44:17.299653] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.768 [2024-06-10 11:44:17.365558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.768 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:24.773 ====================================== 00:06:24.773 busy:2413483220 (cyc) 00:06:24.773 total_run_count: 276000 00:06:24.773 tsc_hz: 2400000000 (cyc) 00:06:24.773 ====================================== 00:06:24.773 poller_cost: 8744 (cyc), 3643 (nsec) 00:06:24.773 00:06:24.773 real 0m1.217s 00:06:24.773 user 0m1.133s 00:06:24.773 sys 0m0.079s 00:06:24.773 11:44:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.773 11:44:18 -- common/autotest_common.sh@10 -- # set +x 00:06:24.773 ************************************ 00:06:24.773 END TEST thread_poller_perf 00:06:24.773 ************************************ 00:06:24.773 11:44:18 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:24.773 11:44:18 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:24.773 11:44:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.773 11:44:18 -- common/autotest_common.sh@10 -- # set +x 00:06:24.773 ************************************ 00:06:24.773 START TEST thread_poller_perf 00:06:24.773 ************************************ 00:06:24.773 11:44:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:24.773 [2024-06-10 11:44:18.494320] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:24.773 [2024-06-10 11:44:18.494404] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1741004 ] 00:06:24.773 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.034 [2024-06-10 11:44:18.557149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.034 [2024-06-10 11:44:18.617743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.034 Running 1000 pollers for 1 seconds with 0 microseconds period. 
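A minimal shell sketch (not part of the captured output) of how the poller_cost figures above follow from the counters poller_perf reports — busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz; the values are copied from the 1-microsecond-period run:
  # figures reported above (assumed meaning: busy TSC cycles, poller invocations, TSC frequency)
  busy=2413483220; runs=276000; tsc_hz=2400000000
  echo "poller_cost_cyc=$(( busy / runs ))"                          # prints 8744, matching "8744 (cyc)"
  echo "poller_cost_ns=$(( (busy / runs) * 1000000000 / tsc_hz ))"   # prints 3643, matching "3643 (nsec)"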
00:06:25.978 ====================================== 00:06:25.978 busy:2402641822 (cyc) 00:06:25.978 total_run_count: 3802000 00:06:25.978 tsc_hz: 2400000000 (cyc) 00:06:25.978 ====================================== 00:06:25.978 poller_cost: 631 (cyc), 262 (nsec) 00:06:25.978 00:06:25.978 real 0m1.199s 00:06:25.978 user 0m1.130s 00:06:25.978 sys 0m0.065s 00:06:25.978 11:44:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.978 11:44:19 -- common/autotest_common.sh@10 -- # set +x 00:06:25.978 ************************************ 00:06:25.978 END TEST thread_poller_perf 00:06:25.978 ************************************ 00:06:25.978 11:44:19 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:25.978 00:06:25.978 real 0m2.598s 00:06:25.978 user 0m2.340s 00:06:25.978 sys 0m0.270s 00:06:25.978 11:44:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.978 11:44:19 -- common/autotest_common.sh@10 -- # set +x 00:06:25.978 ************************************ 00:06:25.978 END TEST thread 00:06:25.978 ************************************ 00:06:25.978 11:44:19 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:25.978 11:44:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:25.978 11:44:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:25.978 11:44:19 -- common/autotest_common.sh@10 -- # set +x 00:06:25.978 ************************************ 00:06:25.978 START TEST accel 00:06:25.978 ************************************ 00:06:26.239 11:44:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:26.239 * Looking for test storage... 00:06:26.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:26.239 11:44:19 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:26.239 11:44:19 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:26.239 11:44:19 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:26.239 11:44:19 -- accel/accel.sh@59 -- # spdk_tgt_pid=1741398 00:06:26.239 11:44:19 -- accel/accel.sh@60 -- # waitforlisten 1741398 00:06:26.239 11:44:19 -- common/autotest_common.sh@819 -- # '[' -z 1741398 ']' 00:06:26.239 11:44:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.239 11:44:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:26.239 11:44:19 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:26.239 11:44:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.239 11:44:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:26.239 11:44:19 -- accel/accel.sh@58 -- # build_accel_config 00:06:26.239 11:44:19 -- common/autotest_common.sh@10 -- # set +x 00:06:26.239 11:44:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.239 11:44:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.239 11:44:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.239 11:44:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.239 11:44:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.239 11:44:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.239 11:44:19 -- accel/accel.sh@42 -- # jq -r . 
00:06:26.239 [2024-06-10 11:44:19.888653] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:26.239 [2024-06-10 11:44:19.888720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1741398 ] 00:06:26.239 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.239 [2024-06-10 11:44:19.952348] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.500 [2024-06-10 11:44:20.025613] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:26.500 [2024-06-10 11:44:20.025742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.070 11:44:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:27.070 11:44:20 -- common/autotest_common.sh@852 -- # return 0 00:06:27.070 11:44:20 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:27.070 11:44:20 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:27.070 11:44:20 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:27.070 11:44:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:27.070 11:44:20 -- common/autotest_common.sh@10 -- # set +x 00:06:27.070 11:44:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:27.070 11:44:20 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:27.070 11:44:20 -- accel/accel.sh@64 -- # IFS== 00:06:27.070 11:44:20 -- accel/accel.sh@64 -- # read -r opc module 00:06:27.070 11:44:20 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:27.070 11:44:20 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:27.070 11:44:20 -- accel/accel.sh@64 -- # IFS== 00:06:27.070 11:44:20 -- accel/accel.sh@64 -- # read -r opc module 00:06:27.070 11:44:20 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:27.070 11:44:20 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:27.070 11:44:20 -- accel/accel.sh@64 -- # IFS== 00:06:27.070 11:44:20 -- accel/accel.sh@64 -- # read -r opc module 00:06:27.070 11:44:20 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:27.070 11:44:20 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:27.070 11:44:20 -- accel/accel.sh@64 -- # IFS== 00:06:27.070 11:44:20 -- accel/accel.sh@64 -- # read -r opc module 00:06:27.070 11:44:20 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:27.070 11:44:20 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:27.070 11:44:20 -- accel/accel.sh@64 -- # IFS== 00:06:27.070 11:44:20 -- accel/accel.sh@64 -- # read -r opc module 00:06:27.070 11:44:20 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:27.070 11:44:20 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:27.070 11:44:20 -- accel/accel.sh@64 -- # IFS== 00:06:27.070 11:44:20 -- accel/accel.sh@64 -- # read -r opc module 00:06:27.070 11:44:20 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:27.070 11:44:20 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:27.070 11:44:20 -- accel/accel.sh@64 -- # IFS== 00:06:27.070 11:44:20 -- accel/accel.sh@64 -- # read -r opc module 00:06:27.070 11:44:20 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:27.070 11:44:20 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 
00:06:27.070 11:44:20 -- accel/accel.sh@64 -- # IFS== 00:06:27.070 11:44:20 -- accel/accel.sh@64 -- # read -r opc module 00:06:27.070 11:44:20 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:27.070 11:44:20 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:27.070 11:44:20 -- accel/accel.sh@64 -- # IFS== 00:06:27.070 11:44:20 -- accel/accel.sh@64 -- # read -r opc module 00:06:27.070 11:44:20 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:27.070 11:44:20 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:27.070 11:44:20 -- accel/accel.sh@64 -- # IFS== 00:06:27.070 11:44:20 -- accel/accel.sh@64 -- # read -r opc module 00:06:27.070 11:44:20 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:27.070 11:44:20 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:27.070 11:44:20 -- accel/accel.sh@64 -- # IFS== 00:06:27.070 11:44:20 -- accel/accel.sh@64 -- # read -r opc module 00:06:27.070 11:44:20 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:27.070 11:44:20 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:27.070 11:44:20 -- accel/accel.sh@64 -- # IFS== 00:06:27.070 11:44:20 -- accel/accel.sh@64 -- # read -r opc module 00:06:27.070 11:44:20 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:27.070 11:44:20 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:27.070 11:44:20 -- accel/accel.sh@64 -- # IFS== 00:06:27.070 11:44:20 -- accel/accel.sh@64 -- # read -r opc module 00:06:27.070 11:44:20 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:27.070 11:44:20 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:27.070 11:44:20 -- accel/accel.sh@64 -- # IFS== 00:06:27.070 11:44:20 -- accel/accel.sh@64 -- # read -r opc module 00:06:27.070 11:44:20 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:27.071 11:44:20 -- accel/accel.sh@67 -- # killprocess 1741398 00:06:27.071 11:44:20 -- common/autotest_common.sh@926 -- # '[' -z 1741398 ']' 00:06:27.071 11:44:20 -- common/autotest_common.sh@930 -- # kill -0 1741398 00:06:27.071 11:44:20 -- common/autotest_common.sh@931 -- # uname 00:06:27.071 11:44:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:27.071 11:44:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1741398 00:06:27.071 11:44:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:27.071 11:44:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:27.071 11:44:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1741398' 00:06:27.071 killing process with pid 1741398 00:06:27.071 11:44:20 -- common/autotest_common.sh@945 -- # kill 1741398 00:06:27.071 11:44:20 -- common/autotest_common.sh@950 -- # wait 1741398 00:06:27.330 11:44:20 -- accel/accel.sh@68 -- # trap - ERR 00:06:27.330 11:44:20 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:27.330 11:44:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:27.330 11:44:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:27.330 11:44:20 -- common/autotest_common.sh@10 -- # set +x 00:06:27.330 11:44:20 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:27.330 11:44:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:27.330 11:44:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.330 11:44:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.330 11:44:20 -- 
accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.330 11:44:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.330 11:44:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.330 11:44:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.330 11:44:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.330 11:44:20 -- accel/accel.sh@42 -- # jq -r . 00:06:27.330 11:44:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.330 11:44:21 -- common/autotest_common.sh@10 -- # set +x 00:06:27.330 11:44:21 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:27.330 11:44:21 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:27.330 11:44:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:27.330 11:44:21 -- common/autotest_common.sh@10 -- # set +x 00:06:27.330 ************************************ 00:06:27.330 START TEST accel_missing_filename 00:06:27.330 ************************************ 00:06:27.330 11:44:21 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:27.330 11:44:21 -- common/autotest_common.sh@640 -- # local es=0 00:06:27.330 11:44:21 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:27.330 11:44:21 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:27.330 11:44:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:27.330 11:44:21 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:27.330 11:44:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:27.330 11:44:21 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:27.330 11:44:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:27.330 11:44:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.330 11:44:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.330 11:44:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.330 11:44:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.330 11:44:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.330 11:44:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.330 11:44:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.330 11:44:21 -- accel/accel.sh@42 -- # jq -r . 00:06:27.330 [2024-06-10 11:44:21.075754] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:27.330 [2024-06-10 11:44:21.075841] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1741765 ] 00:06:27.589 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.589 [2024-06-10 11:44:21.139135] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.589 [2024-06-10 11:44:21.201214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.589 [2024-06-10 11:44:21.232865] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:27.589 [2024-06-10 11:44:21.269875] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:27.589 A filename is required. 
00:06:27.589 11:44:21 -- common/autotest_common.sh@643 -- # es=234 00:06:27.589 11:44:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:27.589 11:44:21 -- common/autotest_common.sh@652 -- # es=106 00:06:27.589 11:44:21 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:27.589 11:44:21 -- common/autotest_common.sh@660 -- # es=1 00:06:27.589 11:44:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:27.589 00:06:27.589 real 0m0.276s 00:06:27.589 user 0m0.215s 00:06:27.589 sys 0m0.102s 00:06:27.589 11:44:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.590 11:44:21 -- common/autotest_common.sh@10 -- # set +x 00:06:27.590 ************************************ 00:06:27.590 END TEST accel_missing_filename 00:06:27.590 ************************************ 00:06:27.590 11:44:21 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:27.590 11:44:21 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:27.590 11:44:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:27.590 11:44:21 -- common/autotest_common.sh@10 -- # set +x 00:06:27.849 ************************************ 00:06:27.849 START TEST accel_compress_verify 00:06:27.849 ************************************ 00:06:27.849 11:44:21 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:27.849 11:44:21 -- common/autotest_common.sh@640 -- # local es=0 00:06:27.849 11:44:21 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:27.849 11:44:21 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:27.849 11:44:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:27.849 11:44:21 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:27.849 11:44:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:27.849 11:44:21 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:27.849 11:44:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:27.849 11:44:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.849 11:44:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.849 11:44:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.849 11:44:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.849 11:44:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.849 11:44:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.849 11:44:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.849 11:44:21 -- accel/accel.sh@42 -- # jq -r . 00:06:27.849 [2024-06-10 11:44:21.394208] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:27.849 [2024-06-10 11:44:21.394286] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1741785 ] 00:06:27.849 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.849 [2024-06-10 11:44:21.454761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.849 [2024-06-10 11:44:21.515018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.849 [2024-06-10 11:44:21.546783] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:27.849 [2024-06-10 11:44:21.583905] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:28.110 00:06:28.110 Compression does not support the verify option, aborting. 00:06:28.110 11:44:21 -- common/autotest_common.sh@643 -- # es=161 00:06:28.110 11:44:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:28.110 11:44:21 -- common/autotest_common.sh@652 -- # es=33 00:06:28.110 11:44:21 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:28.110 11:44:21 -- common/autotest_common.sh@660 -- # es=1 00:06:28.110 11:44:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:28.111 00:06:28.111 real 0m0.272s 00:06:28.111 user 0m0.212s 00:06:28.111 sys 0m0.100s 00:06:28.111 11:44:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.111 11:44:21 -- common/autotest_common.sh@10 -- # set +x 00:06:28.111 ************************************ 00:06:28.111 END TEST accel_compress_verify 00:06:28.111 ************************************ 00:06:28.111 11:44:21 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:28.111 11:44:21 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:28.111 11:44:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:28.111 11:44:21 -- common/autotest_common.sh@10 -- # set +x 00:06:28.111 ************************************ 00:06:28.111 START TEST accel_wrong_workload 00:06:28.111 ************************************ 00:06:28.111 11:44:21 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:28.111 11:44:21 -- common/autotest_common.sh@640 -- # local es=0 00:06:28.111 11:44:21 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:28.111 11:44:21 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:28.111 11:44:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:28.111 11:44:21 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:28.111 11:44:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:28.111 11:44:21 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:28.111 11:44:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:28.111 11:44:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.111 11:44:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.111 11:44:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.111 11:44:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.111 11:44:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.111 11:44:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.111 11:44:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.111 11:44:21 -- accel/accel.sh@42 -- # jq -r . 
00:06:28.111 Unsupported workload type: foobar 00:06:28.111 [2024-06-10 11:44:21.704899] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:28.111 accel_perf options: 00:06:28.111 [-h help message] 00:06:28.111 [-q queue depth per core] 00:06:28.111 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:28.111 [-T number of threads per core 00:06:28.111 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:28.111 [-t time in seconds] 00:06:28.111 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:28.111 [ dif_verify, , dif_generate, dif_generate_copy 00:06:28.111 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:28.111 [-l for compress/decompress workloads, name of uncompressed input file 00:06:28.111 [-S for crc32c workload, use this seed value (default 0) 00:06:28.111 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:28.111 [-f for fill workload, use this BYTE value (default 255) 00:06:28.111 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:28.111 [-y verify result if this switch is on] 00:06:28.111 [-a tasks to allocate per core (default: same value as -q)] 00:06:28.111 Can be used to spread operations across a wider range of memory. 00:06:28.111 11:44:21 -- common/autotest_common.sh@643 -- # es=1 00:06:28.111 11:44:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:28.111 11:44:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:28.111 11:44:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:28.111 00:06:28.111 real 0m0.035s 00:06:28.111 user 0m0.026s 00:06:28.111 sys 0m0.009s 00:06:28.111 11:44:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.111 11:44:21 -- common/autotest_common.sh@10 -- # set +x 00:06:28.111 ************************************ 00:06:28.111 END TEST accel_wrong_workload 00:06:28.111 ************************************ 00:06:28.111 Error: writing output failed: Broken pipe 00:06:28.111 11:44:21 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:28.111 11:44:21 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:28.111 11:44:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:28.111 11:44:21 -- common/autotest_common.sh@10 -- # set +x 00:06:28.111 ************************************ 00:06:28.111 START TEST accel_negative_buffers 00:06:28.111 ************************************ 00:06:28.111 11:44:21 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:28.111 11:44:21 -- common/autotest_common.sh@640 -- # local es=0 00:06:28.111 11:44:21 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:28.111 11:44:21 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:28.111 11:44:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:28.111 11:44:21 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:28.111 11:44:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:28.111 11:44:21 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:28.111 11:44:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
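The options dump above comes from the same accel_perf binary the suite drives; as a hedged illustration (paths assumed relative to the SPDK checkout, flags taken only from the help text and invocations already visible in this log), equivalent standalone runs would look like:
  # CRC-32C for 1 second, seed 32, verify results (mirrors the crc32c test further down)
  ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y
  # copy workload, queue depth 64, 4 KiB transfers
  ./build/examples/accel_perf -t 1 -w copy -q 64 -o 4096 -y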
xor -y -x -1 00:06:28.111 11:44:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.111 11:44:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.111 11:44:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.111 11:44:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.111 11:44:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.111 11:44:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.111 11:44:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.111 11:44:21 -- accel/accel.sh@42 -- # jq -r . 00:06:28.111 -x option must be non-negative. 00:06:28.111 [2024-06-10 11:44:21.782489] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:28.111 accel_perf options: 00:06:28.111 [-h help message] 00:06:28.111 [-q queue depth per core] 00:06:28.111 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:28.111 [-T number of threads per core 00:06:28.111 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:28.111 [-t time in seconds] 00:06:28.111 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:28.111 [ dif_verify, , dif_generate, dif_generate_copy 00:06:28.111 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:28.111 [-l for compress/decompress workloads, name of uncompressed input file 00:06:28.111 [-S for crc32c workload, use this seed value (default 0) 00:06:28.111 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:28.111 [-f for fill workload, use this BYTE value (default 255) 00:06:28.111 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:28.111 [-y verify result if this switch is on] 00:06:28.111 [-a tasks to allocate per core (default: same value as -q)] 00:06:28.111 Can be used to spread operations across a wider range of memory. 
00:06:28.111 11:44:21 -- common/autotest_common.sh@643 -- # es=1 00:06:28.111 11:44:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:28.111 11:44:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:28.111 11:44:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:28.111 00:06:28.111 real 0m0.035s 00:06:28.111 user 0m0.020s 00:06:28.111 sys 0m0.015s 00:06:28.111 11:44:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.111 11:44:21 -- common/autotest_common.sh@10 -- # set +x 00:06:28.111 ************************************ 00:06:28.111 END TEST accel_negative_buffers 00:06:28.111 ************************************ 00:06:28.111 Error: writing output failed: Broken pipe 00:06:28.111 11:44:21 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:28.111 11:44:21 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:28.111 11:44:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:28.111 11:44:21 -- common/autotest_common.sh@10 -- # set +x 00:06:28.111 ************************************ 00:06:28.111 START TEST accel_crc32c 00:06:28.111 ************************************ 00:06:28.111 11:44:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:28.111 11:44:21 -- accel/accel.sh@16 -- # local accel_opc 00:06:28.111 11:44:21 -- accel/accel.sh@17 -- # local accel_module 00:06:28.111 11:44:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:28.111 11:44:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:28.111 11:44:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.111 11:44:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.111 11:44:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.111 11:44:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.111 11:44:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.111 11:44:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.111 11:44:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.111 11:44:21 -- accel/accel.sh@42 -- # jq -r . 00:06:28.111 [2024-06-10 11:44:21.854207] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:28.111 [2024-06-10 11:44:21.854273] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1741847 ] 00:06:28.111 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.370 [2024-06-10 11:44:21.915134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.370 [2024-06-10 11:44:21.978505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.752 11:44:23 -- accel/accel.sh@18 -- # out=' 00:06:29.752 SPDK Configuration: 00:06:29.752 Core mask: 0x1 00:06:29.752 00:06:29.752 Accel Perf Configuration: 00:06:29.752 Workload Type: crc32c 00:06:29.752 CRC-32C seed: 32 00:06:29.752 Transfer size: 4096 bytes 00:06:29.752 Vector count 1 00:06:29.752 Module: software 00:06:29.752 Queue depth: 32 00:06:29.752 Allocate depth: 32 00:06:29.752 # threads/core: 1 00:06:29.752 Run time: 1 seconds 00:06:29.752 Verify: Yes 00:06:29.752 00:06:29.752 Running for 1 seconds... 
00:06:29.752 00:06:29.752 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:29.752 ------------------------------------------------------------------------------------ 00:06:29.752 0,0 444608/s 1736 MiB/s 0 0 00:06:29.752 ==================================================================================== 00:06:29.752 Total 444608/s 1736 MiB/s 0 0' 00:06:29.752 11:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:29.752 11:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:29.752 11:44:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:29.752 11:44:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:29.752 11:44:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.752 11:44:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.752 11:44:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.752 11:44:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.752 11:44:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.753 11:44:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.753 11:44:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.753 11:44:23 -- accel/accel.sh@42 -- # jq -r . 00:06:29.753 [2024-06-10 11:44:23.130521] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:29.753 [2024-06-10 11:44:23.130591] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1742181 ] 00:06:29.753 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.753 [2024-06-10 11:44:23.191358] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.753 [2024-06-10 11:44:23.253766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.753 11:44:23 -- accel/accel.sh@21 -- # val= 00:06:29.753 11:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:29.753 11:44:23 -- accel/accel.sh@21 -- # val= 00:06:29.753 11:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:29.753 11:44:23 -- accel/accel.sh@21 -- # val=0x1 00:06:29.753 11:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:29.753 11:44:23 -- accel/accel.sh@21 -- # val= 00:06:29.753 11:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:29.753 11:44:23 -- accel/accel.sh@21 -- # val= 00:06:29.753 11:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:29.753 11:44:23 -- accel/accel.sh@21 -- # val=crc32c 00:06:29.753 11:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.753 11:44:23 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:29.753 11:44:23 -- accel/accel.sh@21 -- # val=32 00:06:29.753 11:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:29.753 
11:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:29.753 11:44:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:29.753 11:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:29.753 11:44:23 -- accel/accel.sh@21 -- # val= 00:06:29.753 11:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:29.753 11:44:23 -- accel/accel.sh@21 -- # val=software 00:06:29.753 11:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.753 11:44:23 -- accel/accel.sh@23 -- # accel_module=software 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:29.753 11:44:23 -- accel/accel.sh@21 -- # val=32 00:06:29.753 11:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:29.753 11:44:23 -- accel/accel.sh@21 -- # val=32 00:06:29.753 11:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:29.753 11:44:23 -- accel/accel.sh@21 -- # val=1 00:06:29.753 11:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:29.753 11:44:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:29.753 11:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:29.753 11:44:23 -- accel/accel.sh@21 -- # val=Yes 00:06:29.753 11:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:29.753 11:44:23 -- accel/accel.sh@21 -- # val= 00:06:29.753 11:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:29.753 11:44:23 -- accel/accel.sh@21 -- # val= 00:06:29.753 11:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:29.753 11:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:30.694 11:44:24 -- accel/accel.sh@21 -- # val= 00:06:30.694 11:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.694 11:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:30.694 11:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:30.694 11:44:24 -- accel/accel.sh@21 -- # val= 00:06:30.694 11:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.694 11:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:30.694 11:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:30.694 11:44:24 -- accel/accel.sh@21 -- # val= 00:06:30.694 11:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.694 11:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:30.694 11:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:30.694 11:44:24 -- accel/accel.sh@21 -- # val= 00:06:30.694 11:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.694 11:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:30.694 11:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:30.694 11:44:24 -- accel/accel.sh@21 -- # val= 00:06:30.694 11:44:24 -- accel/accel.sh@22 -- # case "$var" in 
00:06:30.694 11:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:30.694 11:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:30.694 11:44:24 -- accel/accel.sh@21 -- # val= 00:06:30.694 11:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.694 11:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:30.694 11:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:30.694 11:44:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:30.694 11:44:24 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:30.694 11:44:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.694 00:06:30.694 real 0m2.557s 00:06:30.694 user 0m2.382s 00:06:30.694 sys 0m0.180s 00:06:30.694 11:44:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.694 11:44:24 -- common/autotest_common.sh@10 -- # set +x 00:06:30.694 ************************************ 00:06:30.694 END TEST accel_crc32c 00:06:30.694 ************************************ 00:06:30.694 11:44:24 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:30.694 11:44:24 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:30.694 11:44:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:30.694 11:44:24 -- common/autotest_common.sh@10 -- # set +x 00:06:30.694 ************************************ 00:06:30.694 START TEST accel_crc32c_C2 00:06:30.694 ************************************ 00:06:30.694 11:44:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:30.694 11:44:24 -- accel/accel.sh@16 -- # local accel_opc 00:06:30.694 11:44:24 -- accel/accel.sh@17 -- # local accel_module 00:06:30.694 11:44:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:30.694 11:44:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:30.694 11:44:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.694 11:44:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.694 11:44:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.694 11:44:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.694 11:44:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.694 11:44:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.694 11:44:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.694 11:44:24 -- accel/accel.sh@42 -- # jq -r . 00:06:30.694 [2024-06-10 11:44:24.453812] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:30.694 [2024-06-10 11:44:24.453915] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1742532 ] 00:06:30.954 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.954 [2024-06-10 11:44:24.515345] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.954 [2024-06-10 11:44:24.578225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.337 11:44:25 -- accel/accel.sh@18 -- # out=' 00:06:32.337 SPDK Configuration: 00:06:32.337 Core mask: 0x1 00:06:32.337 00:06:32.337 Accel Perf Configuration: 00:06:32.337 Workload Type: crc32c 00:06:32.337 CRC-32C seed: 0 00:06:32.337 Transfer size: 4096 bytes 00:06:32.337 Vector count 2 00:06:32.337 Module: software 00:06:32.337 Queue depth: 32 00:06:32.337 Allocate depth: 32 00:06:32.337 # threads/core: 1 00:06:32.337 Run time: 1 seconds 00:06:32.337 Verify: Yes 00:06:32.337 00:06:32.337 Running for 1 seconds... 00:06:32.337 00:06:32.337 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:32.337 ------------------------------------------------------------------------------------ 00:06:32.338 0,0 377472/s 2949 MiB/s 0 0 00:06:32.338 ==================================================================================== 00:06:32.338 Total 377472/s 1474 MiB/s 0 0' 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:32.338 11:44:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:32.338 11:44:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:32.338 11:44:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.338 11:44:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.338 11:44:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.338 11:44:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.338 11:44:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.338 11:44:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.338 11:44:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.338 11:44:25 -- accel/accel.sh@42 -- # jq -r . 00:06:32.338 [2024-06-10 11:44:25.729853] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
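A short arithmetic check (not part of the captured output) that the bandwidth columns in the two crc32c runs above are consistent with transfers/sec times the 4096-byte transfer size, with the per-core row of the -C 2 run apparently counting both source vectors:
  echo $(( 444608 * 4096 / 1048576 ))        # 1736 -> "1736 MiB/s" (single 4 KiB vector)
  echo $(( 377472 * 2 * 4096 / 1048576 ))    # 2949 -> per-core "2949 MiB/s" with vector count 2
  echo $(( 377472 * 4096 / 1048576 ))        # 1474 -> "1474 MiB/s" on the Total line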
00:06:32.338 [2024-06-10 11:44:25.729931] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1742670 ] 00:06:32.338 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.338 [2024-06-10 11:44:25.790986] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.338 [2024-06-10 11:44:25.855025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.338 11:44:25 -- accel/accel.sh@21 -- # val= 00:06:32.338 11:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:32.338 11:44:25 -- accel/accel.sh@21 -- # val= 00:06:32.338 11:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:32.338 11:44:25 -- accel/accel.sh@21 -- # val=0x1 00:06:32.338 11:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:32.338 11:44:25 -- accel/accel.sh@21 -- # val= 00:06:32.338 11:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:32.338 11:44:25 -- accel/accel.sh@21 -- # val= 00:06:32.338 11:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:32.338 11:44:25 -- accel/accel.sh@21 -- # val=crc32c 00:06:32.338 11:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.338 11:44:25 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:32.338 11:44:25 -- accel/accel.sh@21 -- # val=0 00:06:32.338 11:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:32.338 11:44:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:32.338 11:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:32.338 11:44:25 -- accel/accel.sh@21 -- # val= 00:06:32.338 11:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:32.338 11:44:25 -- accel/accel.sh@21 -- # val=software 00:06:32.338 11:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.338 11:44:25 -- accel/accel.sh@23 -- # accel_module=software 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:32.338 11:44:25 -- accel/accel.sh@21 -- # val=32 00:06:32.338 11:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:32.338 11:44:25 -- accel/accel.sh@21 -- # val=32 00:06:32.338 11:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:32.338 11:44:25 -- 
accel/accel.sh@21 -- # val=1 00:06:32.338 11:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:32.338 11:44:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:32.338 11:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:32.338 11:44:25 -- accel/accel.sh@21 -- # val=Yes 00:06:32.338 11:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:32.338 11:44:25 -- accel/accel.sh@21 -- # val= 00:06:32.338 11:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:32.338 11:44:25 -- accel/accel.sh@21 -- # val= 00:06:32.338 11:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # IFS=: 00:06:32.338 11:44:25 -- accel/accel.sh@20 -- # read -r var val 00:06:33.279 11:44:26 -- accel/accel.sh@21 -- # val= 00:06:33.279 11:44:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.279 11:44:26 -- accel/accel.sh@20 -- # IFS=: 00:06:33.279 11:44:26 -- accel/accel.sh@20 -- # read -r var val 00:06:33.279 11:44:26 -- accel/accel.sh@21 -- # val= 00:06:33.279 11:44:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.279 11:44:26 -- accel/accel.sh@20 -- # IFS=: 00:06:33.279 11:44:26 -- accel/accel.sh@20 -- # read -r var val 00:06:33.279 11:44:26 -- accel/accel.sh@21 -- # val= 00:06:33.279 11:44:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.279 11:44:26 -- accel/accel.sh@20 -- # IFS=: 00:06:33.279 11:44:26 -- accel/accel.sh@20 -- # read -r var val 00:06:33.279 11:44:26 -- accel/accel.sh@21 -- # val= 00:06:33.279 11:44:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.279 11:44:26 -- accel/accel.sh@20 -- # IFS=: 00:06:33.279 11:44:26 -- accel/accel.sh@20 -- # read -r var val 00:06:33.279 11:44:26 -- accel/accel.sh@21 -- # val= 00:06:33.279 11:44:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.279 11:44:26 -- accel/accel.sh@20 -- # IFS=: 00:06:33.279 11:44:26 -- accel/accel.sh@20 -- # read -r var val 00:06:33.279 11:44:26 -- accel/accel.sh@21 -- # val= 00:06:33.279 11:44:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.279 11:44:26 -- accel/accel.sh@20 -- # IFS=: 00:06:33.279 11:44:26 -- accel/accel.sh@20 -- # read -r var val 00:06:33.279 11:44:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:33.279 11:44:26 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:33.279 11:44:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.279 00:06:33.279 real 0m2.559s 00:06:33.279 user 0m2.376s 00:06:33.279 sys 0m0.190s 00:06:33.279 11:44:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.279 11:44:26 -- common/autotest_common.sh@10 -- # set +x 00:06:33.279 ************************************ 00:06:33.279 END TEST accel_crc32c_C2 00:06:33.279 ************************************ 00:06:33.279 11:44:27 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:33.279 11:44:27 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:33.279 11:44:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:33.279 11:44:27 -- common/autotest_common.sh@10 -- # set +x 00:06:33.279 ************************************ 00:06:33.279 START TEST accel_copy 
00:06:33.279 ************************************ 00:06:33.279 11:44:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:33.279 11:44:27 -- accel/accel.sh@16 -- # local accel_opc 00:06:33.279 11:44:27 -- accel/accel.sh@17 -- # local accel_module 00:06:33.279 11:44:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:33.279 11:44:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:33.279 11:44:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.279 11:44:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.279 11:44:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.279 11:44:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.279 11:44:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.279 11:44:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.279 11:44:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.279 11:44:27 -- accel/accel.sh@42 -- # jq -r . 00:06:33.541 [2024-06-10 11:44:27.051215] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:33.541 [2024-06-10 11:44:27.051329] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1742922 ] 00:06:33.541 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.541 [2024-06-10 11:44:27.120499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.541 [2024-06-10 11:44:27.183815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.927 11:44:28 -- accel/accel.sh@18 -- # out=' 00:06:34.927 SPDK Configuration: 00:06:34.927 Core mask: 0x1 00:06:34.927 00:06:34.927 Accel Perf Configuration: 00:06:34.927 Workload Type: copy 00:06:34.927 Transfer size: 4096 bytes 00:06:34.927 Vector count 1 00:06:34.927 Module: software 00:06:34.927 Queue depth: 32 00:06:34.927 Allocate depth: 32 00:06:34.927 # threads/core: 1 00:06:34.927 Run time: 1 seconds 00:06:34.927 Verify: Yes 00:06:34.927 00:06:34.927 Running for 1 seconds... 00:06:34.927 00:06:34.927 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:34.927 ------------------------------------------------------------------------------------ 00:06:34.927 0,0 304992/s 1191 MiB/s 0 0 00:06:34.927 ==================================================================================== 00:06:34.927 Total 304992/s 1191 MiB/s 0 0' 00:06:34.927 11:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:34.927 11:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:34.927 11:44:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:34.927 11:44:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:34.927 11:44:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.927 11:44:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.927 11:44:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.927 11:44:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.927 11:44:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.927 11:44:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.927 11:44:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.927 11:44:28 -- accel/accel.sh@42 -- # jq -r . 00:06:34.927 [2024-06-10 11:44:28.335928] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:34.927 [2024-06-10 11:44:28.336000] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1743257 ] 00:06:34.927 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.927 [2024-06-10 11:44:28.396981] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.927 [2024-06-10 11:44:28.458515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.927 11:44:28 -- accel/accel.sh@21 -- # val= 00:06:34.927 11:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.927 11:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:34.927 11:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:34.927 11:44:28 -- accel/accel.sh@21 -- # val= 00:06:34.927 11:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.927 11:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:34.927 11:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:34.927 11:44:28 -- accel/accel.sh@21 -- # val=0x1 00:06:34.927 11:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.927 11:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:34.927 11:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:34.927 11:44:28 -- accel/accel.sh@21 -- # val= 00:06:34.927 11:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.927 11:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:34.927 11:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:34.927 11:44:28 -- accel/accel.sh@21 -- # val= 00:06:34.927 11:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.927 11:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:34.927 11:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:34.927 11:44:28 -- accel/accel.sh@21 -- # val=copy 00:06:34.927 11:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.927 11:44:28 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:34.927 11:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:34.927 11:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:34.927 11:44:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.927 11:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.927 11:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:34.927 11:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:34.927 11:44:28 -- accel/accel.sh@21 -- # val= 00:06:34.927 11:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.927 11:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:34.927 11:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:34.927 11:44:28 -- accel/accel.sh@21 -- # val=software 00:06:34.928 11:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.928 11:44:28 -- accel/accel.sh@23 -- # accel_module=software 00:06:34.928 11:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:34.928 11:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:34.928 11:44:28 -- accel/accel.sh@21 -- # val=32 00:06:34.928 11:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.928 11:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:34.928 11:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:34.928 11:44:28 -- accel/accel.sh@21 -- # val=32 00:06:34.928 11:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.928 11:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:34.928 11:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:34.928 11:44:28 -- accel/accel.sh@21 -- # val=1 00:06:34.928 11:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.928 11:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:34.928 11:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:34.928 11:44:28 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:34.928 11:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.928 11:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:34.928 11:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:34.928 11:44:28 -- accel/accel.sh@21 -- # val=Yes 00:06:34.928 11:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.928 11:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:34.928 11:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:34.928 11:44:28 -- accel/accel.sh@21 -- # val= 00:06:34.928 11:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.928 11:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:34.928 11:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:34.928 11:44:28 -- accel/accel.sh@21 -- # val= 00:06:34.928 11:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.928 11:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:34.928 11:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:35.872 11:44:29 -- accel/accel.sh@21 -- # val= 00:06:35.872 11:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.872 11:44:29 -- accel/accel.sh@20 -- # IFS=: 00:06:35.872 11:44:29 -- accel/accel.sh@20 -- # read -r var val 00:06:35.872 11:44:29 -- accel/accel.sh@21 -- # val= 00:06:35.872 11:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.872 11:44:29 -- accel/accel.sh@20 -- # IFS=: 00:06:35.872 11:44:29 -- accel/accel.sh@20 -- # read -r var val 00:06:35.872 11:44:29 -- accel/accel.sh@21 -- # val= 00:06:35.872 11:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.872 11:44:29 -- accel/accel.sh@20 -- # IFS=: 00:06:35.872 11:44:29 -- accel/accel.sh@20 -- # read -r var val 00:06:35.872 11:44:29 -- accel/accel.sh@21 -- # val= 00:06:35.872 11:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.872 11:44:29 -- accel/accel.sh@20 -- # IFS=: 00:06:35.872 11:44:29 -- accel/accel.sh@20 -- # read -r var val 00:06:35.872 11:44:29 -- accel/accel.sh@21 -- # val= 00:06:35.872 11:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.872 11:44:29 -- accel/accel.sh@20 -- # IFS=: 00:06:35.872 11:44:29 -- accel/accel.sh@20 -- # read -r var val 00:06:35.872 11:44:29 -- accel/accel.sh@21 -- # val= 00:06:35.872 11:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.872 11:44:29 -- accel/accel.sh@20 -- # IFS=: 00:06:35.872 11:44:29 -- accel/accel.sh@20 -- # read -r var val 00:06:35.872 11:44:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:35.872 11:44:29 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:35.872 11:44:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.872 00:06:35.872 real 0m2.565s 00:06:35.872 user 0m2.364s 00:06:35.872 sys 0m0.205s 00:06:35.872 11:44:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.872 11:44:29 -- common/autotest_common.sh@10 -- # set +x 00:06:35.872 ************************************ 00:06:35.872 END TEST accel_copy 00:06:35.872 ************************************ 00:06:35.872 11:44:29 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:35.872 11:44:29 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:35.872 11:44:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.872 11:44:29 -- common/autotest_common.sh@10 -- # set +x 00:06:35.872 ************************************ 00:06:35.872 START TEST accel_fill 00:06:35.872 ************************************ 00:06:35.872 11:44:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:35.872 11:44:29 -- accel/accel.sh@16 -- # local accel_opc 
00:06:35.872 11:44:29 -- accel/accel.sh@17 -- # local accel_module 00:06:35.872 11:44:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:35.872 11:44:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:35.872 11:44:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.872 11:44:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.872 11:44:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.872 11:44:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.872 11:44:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.872 11:44:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.872 11:44:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.872 11:44:29 -- accel/accel.sh@42 -- # jq -r . 00:06:36.133 [2024-06-10 11:44:29.654722] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:36.133 [2024-06-10 11:44:29.654800] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1743608 ] 00:06:36.133 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.133 [2024-06-10 11:44:29.715414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.133 [2024-06-10 11:44:29.778996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.518 11:44:30 -- accel/accel.sh@18 -- # out=' 00:06:37.518 SPDK Configuration: 00:06:37.518 Core mask: 0x1 00:06:37.518 00:06:37.518 Accel Perf Configuration: 00:06:37.518 Workload Type: fill 00:06:37.518 Fill pattern: 0x80 00:06:37.518 Transfer size: 4096 bytes 00:06:37.518 Vector count 1 00:06:37.518 Module: software 00:06:37.518 Queue depth: 64 00:06:37.518 Allocate depth: 64 00:06:37.518 # threads/core: 1 00:06:37.518 Run time: 1 seconds 00:06:37.518 Verify: Yes 00:06:37.518 00:06:37.518 Running for 1 seconds... 00:06:37.518 00:06:37.518 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:37.518 ------------------------------------------------------------------------------------ 00:06:37.518 0,0 466880/s 1823 MiB/s 0 0 00:06:37.518 ==================================================================================== 00:06:37.518 Total 466880/s 1823 MiB/s 0 0' 00:06:37.518 11:44:30 -- accel/accel.sh@20 -- # IFS=: 00:06:37.518 11:44:30 -- accel/accel.sh@20 -- # read -r var val 00:06:37.518 11:44:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:37.518 11:44:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:37.518 11:44:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.518 11:44:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.518 11:44:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.518 11:44:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.518 11:44:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.518 11:44:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.518 11:44:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.518 11:44:30 -- accel/accel.sh@42 -- # jq -r . 00:06:37.518 [2024-06-10 11:44:30.932800] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
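(For the fill case, the extra flags in the command recorded above map onto the configuration block it prints; a sketch under the same local-build assumption as before.)
  ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y   # -f 128 = fill pattern 0x80, -q 64 = queue depth 64, -a 64 = allocate depth 64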
00:06:37.518 [2024-06-10 11:44:30.932901] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1743837 ] 00:06:37.518 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.518 [2024-06-10 11:44:30.996793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.518 [2024-06-10 11:44:31.059747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.518 11:44:31 -- accel/accel.sh@21 -- # val= 00:06:37.518 11:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.518 11:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:37.518 11:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:37.518 11:44:31 -- accel/accel.sh@21 -- # val= 00:06:37.518 11:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.518 11:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:37.518 11:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:37.518 11:44:31 -- accel/accel.sh@21 -- # val=0x1 00:06:37.518 11:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.518 11:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:37.518 11:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:37.518 11:44:31 -- accel/accel.sh@21 -- # val= 00:06:37.518 11:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.518 11:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:37.518 11:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:37.518 11:44:31 -- accel/accel.sh@21 -- # val= 00:06:37.518 11:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.518 11:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:37.518 11:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:37.518 11:44:31 -- accel/accel.sh@21 -- # val=fill 00:06:37.518 11:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.518 11:44:31 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:37.518 11:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:37.518 11:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:37.518 11:44:31 -- accel/accel.sh@21 -- # val=0x80 00:06:37.518 11:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.518 11:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:37.518 11:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:37.518 11:44:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:37.518 11:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.518 11:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:37.519 11:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:37.519 11:44:31 -- accel/accel.sh@21 -- # val= 00:06:37.519 11:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.519 11:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:37.519 11:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:37.519 11:44:31 -- accel/accel.sh@21 -- # val=software 00:06:37.519 11:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.519 11:44:31 -- accel/accel.sh@23 -- # accel_module=software 00:06:37.519 11:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:37.519 11:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:37.519 11:44:31 -- accel/accel.sh@21 -- # val=64 00:06:37.519 11:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.519 11:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:37.519 11:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:37.519 11:44:31 -- accel/accel.sh@21 -- # val=64 00:06:37.519 11:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.519 11:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:37.519 11:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:37.519 11:44:31 -- 
accel/accel.sh@21 -- # val=1 00:06:37.519 11:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.519 11:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:37.519 11:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:37.519 11:44:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:37.519 11:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.519 11:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:37.519 11:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:37.519 11:44:31 -- accel/accel.sh@21 -- # val=Yes 00:06:37.519 11:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.519 11:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:37.519 11:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:37.519 11:44:31 -- accel/accel.sh@21 -- # val= 00:06:37.519 11:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.519 11:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:37.519 11:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:37.519 11:44:31 -- accel/accel.sh@21 -- # val= 00:06:37.519 11:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.519 11:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:37.519 11:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:38.469 11:44:32 -- accel/accel.sh@21 -- # val= 00:06:38.469 11:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.469 11:44:32 -- accel/accel.sh@20 -- # IFS=: 00:06:38.469 11:44:32 -- accel/accel.sh@20 -- # read -r var val 00:06:38.469 11:44:32 -- accel/accel.sh@21 -- # val= 00:06:38.469 11:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.469 11:44:32 -- accel/accel.sh@20 -- # IFS=: 00:06:38.470 11:44:32 -- accel/accel.sh@20 -- # read -r var val 00:06:38.470 11:44:32 -- accel/accel.sh@21 -- # val= 00:06:38.470 11:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.470 11:44:32 -- accel/accel.sh@20 -- # IFS=: 00:06:38.470 11:44:32 -- accel/accel.sh@20 -- # read -r var val 00:06:38.470 11:44:32 -- accel/accel.sh@21 -- # val= 00:06:38.470 11:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.470 11:44:32 -- accel/accel.sh@20 -- # IFS=: 00:06:38.470 11:44:32 -- accel/accel.sh@20 -- # read -r var val 00:06:38.470 11:44:32 -- accel/accel.sh@21 -- # val= 00:06:38.470 11:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.470 11:44:32 -- accel/accel.sh@20 -- # IFS=: 00:06:38.470 11:44:32 -- accel/accel.sh@20 -- # read -r var val 00:06:38.470 11:44:32 -- accel/accel.sh@21 -- # val= 00:06:38.470 11:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.470 11:44:32 -- accel/accel.sh@20 -- # IFS=: 00:06:38.470 11:44:32 -- accel/accel.sh@20 -- # read -r var val 00:06:38.470 11:44:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:38.470 11:44:32 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:38.470 11:44:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.470 00:06:38.470 real 0m2.563s 00:06:38.470 user 0m2.372s 00:06:38.470 sys 0m0.196s 00:06:38.470 11:44:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.470 11:44:32 -- common/autotest_common.sh@10 -- # set +x 00:06:38.470 ************************************ 00:06:38.470 END TEST accel_fill 00:06:38.470 ************************************ 00:06:38.470 11:44:32 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:38.470 11:44:32 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:38.470 11:44:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.470 11:44:32 -- common/autotest_common.sh@10 -- # set +x 00:06:38.470 ************************************ 00:06:38.470 START TEST 
accel_copy_crc32c 00:06:38.470 ************************************ 00:06:38.470 11:44:32 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:38.470 11:44:32 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.470 11:44:32 -- accel/accel.sh@17 -- # local accel_module 00:06:38.470 11:44:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:38.470 11:44:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:38.470 11:44:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.470 11:44:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.470 11:44:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.470 11:44:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.470 11:44:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.470 11:44:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.470 11:44:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.470 11:44:32 -- accel/accel.sh@42 -- # jq -r . 00:06:38.731 [2024-06-10 11:44:32.258787] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:38.731 [2024-06-10 11:44:32.258890] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744005 ] 00:06:38.731 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.731 [2024-06-10 11:44:32.320755] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.731 [2024-06-10 11:44:32.384921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.116 11:44:33 -- accel/accel.sh@18 -- # out=' 00:06:40.116 SPDK Configuration: 00:06:40.116 Core mask: 0x1 00:06:40.116 00:06:40.116 Accel Perf Configuration: 00:06:40.116 Workload Type: copy_crc32c 00:06:40.116 CRC-32C seed: 0 00:06:40.116 Vector size: 4096 bytes 00:06:40.116 Transfer size: 4096 bytes 00:06:40.116 Vector count 1 00:06:40.116 Module: software 00:06:40.116 Queue depth: 32 00:06:40.116 Allocate depth: 32 00:06:40.116 # threads/core: 1 00:06:40.116 Run time: 1 seconds 00:06:40.116 Verify: Yes 00:06:40.116 00:06:40.116 Running for 1 seconds... 00:06:40.116 00:06:40.116 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:40.116 ------------------------------------------------------------------------------------ 00:06:40.116 0,0 246432/s 962 MiB/s 0 0 00:06:40.116 ==================================================================================== 00:06:40.116 Total 246432/s 962 MiB/s 0 0' 00:06:40.116 11:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.116 11:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.116 11:44:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:40.116 11:44:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:40.116 11:44:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.116 11:44:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.116 11:44:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.116 11:44:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.116 11:44:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.116 11:44:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.116 11:44:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.116 11:44:33 -- accel/accel.sh@42 -- # jq -r . 
00:06:40.116 [2024-06-10 11:44:33.538593] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:40.116 [2024-06-10 11:44:33.538698] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744316 ] 00:06:40.116 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.116 [2024-06-10 11:44:33.601481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.116 [2024-06-10 11:44:33.663600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.116 11:44:33 -- accel/accel.sh@21 -- # val= 00:06:40.116 11:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.116 11:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.116 11:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.116 11:44:33 -- accel/accel.sh@21 -- # val= 00:06:40.116 11:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.116 11:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.116 11:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.116 11:44:33 -- accel/accel.sh@21 -- # val=0x1 00:06:40.116 11:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.116 11:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.116 11:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.116 11:44:33 -- accel/accel.sh@21 -- # val= 00:06:40.116 11:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.116 11:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.116 11:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.116 11:44:33 -- accel/accel.sh@21 -- # val= 00:06:40.116 11:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.116 11:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.116 11:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.116 11:44:33 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:40.116 11:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.116 11:44:33 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:40.116 11:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.116 11:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.116 11:44:33 -- accel/accel.sh@21 -- # val=0 00:06:40.116 11:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.116 11:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.116 11:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.116 11:44:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.116 11:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.116 11:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.116 11:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.116 11:44:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.116 11:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.117 11:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.117 11:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.117 11:44:33 -- accel/accel.sh@21 -- # val= 00:06:40.117 11:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.117 11:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.117 11:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.117 11:44:33 -- accel/accel.sh@21 -- # val=software 00:06:40.117 11:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.117 11:44:33 -- accel/accel.sh@23 -- # accel_module=software 00:06:40.117 11:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.117 11:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.117 11:44:33 -- accel/accel.sh@21 -- # val=32 00:06:40.117 11:44:33 -- accel/accel.sh@22 -- # case "$var" in 
00:06:40.117 11:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.117 11:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.117 11:44:33 -- accel/accel.sh@21 -- # val=32 00:06:40.117 11:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.117 11:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.117 11:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.117 11:44:33 -- accel/accel.sh@21 -- # val=1 00:06:40.117 11:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.117 11:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.117 11:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.117 11:44:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:40.117 11:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.117 11:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.117 11:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.117 11:44:33 -- accel/accel.sh@21 -- # val=Yes 00:06:40.117 11:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.117 11:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.117 11:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.117 11:44:33 -- accel/accel.sh@21 -- # val= 00:06:40.117 11:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.117 11:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.117 11:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.117 11:44:33 -- accel/accel.sh@21 -- # val= 00:06:40.117 11:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.117 11:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.117 11:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:41.059 11:44:34 -- accel/accel.sh@21 -- # val= 00:06:41.059 11:44:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.059 11:44:34 -- accel/accel.sh@20 -- # IFS=: 00:06:41.059 11:44:34 -- accel/accel.sh@20 -- # read -r var val 00:06:41.059 11:44:34 -- accel/accel.sh@21 -- # val= 00:06:41.059 11:44:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.059 11:44:34 -- accel/accel.sh@20 -- # IFS=: 00:06:41.059 11:44:34 -- accel/accel.sh@20 -- # read -r var val 00:06:41.059 11:44:34 -- accel/accel.sh@21 -- # val= 00:06:41.059 11:44:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.059 11:44:34 -- accel/accel.sh@20 -- # IFS=: 00:06:41.059 11:44:34 -- accel/accel.sh@20 -- # read -r var val 00:06:41.059 11:44:34 -- accel/accel.sh@21 -- # val= 00:06:41.059 11:44:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.059 11:44:34 -- accel/accel.sh@20 -- # IFS=: 00:06:41.059 11:44:34 -- accel/accel.sh@20 -- # read -r var val 00:06:41.059 11:44:34 -- accel/accel.sh@21 -- # val= 00:06:41.059 11:44:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.059 11:44:34 -- accel/accel.sh@20 -- # IFS=: 00:06:41.059 11:44:34 -- accel/accel.sh@20 -- # read -r var val 00:06:41.059 11:44:34 -- accel/accel.sh@21 -- # val= 00:06:41.059 11:44:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.059 11:44:34 -- accel/accel.sh@20 -- # IFS=: 00:06:41.059 11:44:34 -- accel/accel.sh@20 -- # read -r var val 00:06:41.059 11:44:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:41.059 11:44:34 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:41.059 11:44:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.059 00:06:41.059 real 0m2.563s 00:06:41.059 user 0m2.368s 00:06:41.059 sys 0m0.201s 00:06:41.059 11:44:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.059 11:44:34 -- common/autotest_common.sh@10 -- # set +x 00:06:41.059 ************************************ 00:06:41.059 END TEST accel_copy_crc32c 00:06:41.059 ************************************ 00:06:41.059 
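(The bandwidth column in the copy_crc32c table above follows from the transfer count and the 4096-byte transfer size; a quick check in shell arithmetic, integer-truncated.)
  echo $(( 246432 * 4096 / 1048576 ))   # prints 962, i.e. ~962 MiB/s as reported for run 1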
11:44:34 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:41.059 11:44:34 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:41.059 11:44:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.059 11:44:34 -- common/autotest_common.sh@10 -- # set +x 00:06:41.320 ************************************ 00:06:41.320 START TEST accel_copy_crc32c_C2 00:06:41.320 ************************************ 00:06:41.320 11:44:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:41.320 11:44:34 -- accel/accel.sh@16 -- # local accel_opc 00:06:41.320 11:44:34 -- accel/accel.sh@17 -- # local accel_module 00:06:41.320 11:44:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:41.320 11:44:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:41.320 11:44:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.320 11:44:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.320 11:44:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.320 11:44:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.320 11:44:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.320 11:44:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.320 11:44:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.320 11:44:34 -- accel/accel.sh@42 -- # jq -r . 00:06:41.320 [2024-06-10 11:44:34.859522] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:41.320 [2024-06-10 11:44:34.859595] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744668 ] 00:06:41.320 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.320 [2024-06-10 11:44:34.919793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.320 [2024-06-10 11:44:34.981010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.705 11:44:36 -- accel/accel.sh@18 -- # out=' 00:06:42.705 SPDK Configuration: 00:06:42.705 Core mask: 0x1 00:06:42.705 00:06:42.705 Accel Perf Configuration: 00:06:42.705 Workload Type: copy_crc32c 00:06:42.705 CRC-32C seed: 0 00:06:42.705 Vector size: 4096 bytes 00:06:42.705 Transfer size: 8192 bytes 00:06:42.705 Vector count 2 00:06:42.705 Module: software 00:06:42.705 Queue depth: 32 00:06:42.705 Allocate depth: 32 00:06:42.705 # threads/core: 1 00:06:42.705 Run time: 1 seconds 00:06:42.705 Verify: Yes 00:06:42.705 00:06:42.705 Running for 1 seconds... 
00:06:42.705 00:06:42.705 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:42.705 ------------------------------------------------------------------------------------ 00:06:42.705 0,0 184448/s 1441 MiB/s 0 0 00:06:42.705 ==================================================================================== 00:06:42.705 Total 184448/s 720 MiB/s 0 0' 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 11:44:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:42.705 11:44:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:42.705 11:44:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.705 11:44:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.705 11:44:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.705 11:44:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.705 11:44:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.705 11:44:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.705 11:44:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.705 11:44:36 -- accel/accel.sh@42 -- # jq -r . 00:06:42.705 [2024-06-10 11:44:36.133316] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:42.705 [2024-06-10 11:44:36.133417] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744977 ] 00:06:42.705 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.705 [2024-06-10 11:44:36.194853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.705 [2024-06-10 11:44:36.256466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.705 11:44:36 -- accel/accel.sh@21 -- # val= 00:06:42.705 11:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 11:44:36 -- accel/accel.sh@21 -- # val= 00:06:42.705 11:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 11:44:36 -- accel/accel.sh@21 -- # val=0x1 00:06:42.705 11:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 11:44:36 -- accel/accel.sh@21 -- # val= 00:06:42.705 11:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 11:44:36 -- accel/accel.sh@21 -- # val= 00:06:42.705 11:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 11:44:36 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:42.705 11:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 11:44:36 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 11:44:36 -- accel/accel.sh@21 -- # val=0 00:06:42.705 11:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # IFS=: 
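(For the two-vector variant, the -C 2 flag in the command recorded above corresponds to the Vector count 2 and the 8192-byte transfer size, i.e. 2 x 4096, shown in its configuration block; a local sketch under the same assumptions as the earlier examples.)
  ./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2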
00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 11:44:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:42.705 11:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 11:44:36 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:42.705 11:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 11:44:36 -- accel/accel.sh@21 -- # val= 00:06:42.705 11:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 11:44:36 -- accel/accel.sh@21 -- # val=software 00:06:42.705 11:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 11:44:36 -- accel/accel.sh@23 -- # accel_module=software 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 11:44:36 -- accel/accel.sh@21 -- # val=32 00:06:42.705 11:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 11:44:36 -- accel/accel.sh@21 -- # val=32 00:06:42.705 11:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 11:44:36 -- accel/accel.sh@21 -- # val=1 00:06:42.705 11:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 11:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 11:44:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:42.706 11:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.706 11:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:42.706 11:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:42.706 11:44:36 -- accel/accel.sh@21 -- # val=Yes 00:06:42.706 11:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.706 11:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:42.706 11:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:42.706 11:44:36 -- accel/accel.sh@21 -- # val= 00:06:42.706 11:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.706 11:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:42.706 11:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:42.706 11:44:36 -- accel/accel.sh@21 -- # val= 00:06:42.706 11:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.706 11:44:36 -- accel/accel.sh@20 -- # IFS=: 00:06:42.706 11:44:36 -- accel/accel.sh@20 -- # read -r var val 00:06:43.648 11:44:37 -- accel/accel.sh@21 -- # val= 00:06:43.648 11:44:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.648 11:44:37 -- accel/accel.sh@20 -- # IFS=: 00:06:43.648 11:44:37 -- accel/accel.sh@20 -- # read -r var val 00:06:43.648 11:44:37 -- accel/accel.sh@21 -- # val= 00:06:43.648 11:44:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.649 11:44:37 -- accel/accel.sh@20 -- # IFS=: 00:06:43.649 11:44:37 -- accel/accel.sh@20 -- # read -r var val 00:06:43.649 11:44:37 -- accel/accel.sh@21 -- # val= 00:06:43.649 11:44:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.649 11:44:37 -- accel/accel.sh@20 -- # IFS=: 00:06:43.649 11:44:37 -- accel/accel.sh@20 -- # read -r var val 00:06:43.649 11:44:37 -- accel/accel.sh@21 -- # val= 00:06:43.649 11:44:37 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:43.649 11:44:37 -- accel/accel.sh@20 -- # IFS=: 00:06:43.649 11:44:37 -- accel/accel.sh@20 -- # read -r var val 00:06:43.649 11:44:37 -- accel/accel.sh@21 -- # val= 00:06:43.649 11:44:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.649 11:44:37 -- accel/accel.sh@20 -- # IFS=: 00:06:43.649 11:44:37 -- accel/accel.sh@20 -- # read -r var val 00:06:43.649 11:44:37 -- accel/accel.sh@21 -- # val= 00:06:43.649 11:44:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.649 11:44:37 -- accel/accel.sh@20 -- # IFS=: 00:06:43.649 11:44:37 -- accel/accel.sh@20 -- # read -r var val 00:06:43.649 11:44:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:43.649 11:44:37 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:43.649 11:44:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.649 00:06:43.649 real 0m2.554s 00:06:43.649 user 0m2.366s 00:06:43.649 sys 0m0.194s 00:06:43.649 11:44:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.649 11:44:37 -- common/autotest_common.sh@10 -- # set +x 00:06:43.649 ************************************ 00:06:43.649 END TEST accel_copy_crc32c_C2 00:06:43.649 ************************************ 00:06:43.649 11:44:37 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:43.649 11:44:37 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:43.649 11:44:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:43.910 11:44:37 -- common/autotest_common.sh@10 -- # set +x 00:06:43.910 ************************************ 00:06:43.910 START TEST accel_dualcast 00:06:43.910 ************************************ 00:06:43.910 11:44:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:06:43.910 11:44:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:43.910 11:44:37 -- accel/accel.sh@17 -- # local accel_module 00:06:43.910 11:44:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:43.910 11:44:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:43.910 11:44:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.910 11:44:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.910 11:44:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.910 11:44:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.910 11:44:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.910 11:44:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.910 11:44:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.910 11:44:37 -- accel/accel.sh@42 -- # jq -r . 00:06:43.910 [2024-06-10 11:44:37.453819] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:43.910 [2024-06-10 11:44:37.453920] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745149 ] 00:06:43.911 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.911 [2024-06-10 11:44:37.516464] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.911 [2024-06-10 11:44:37.580897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.296 11:44:38 -- accel/accel.sh@18 -- # out=' 00:06:45.296 SPDK Configuration: 00:06:45.296 Core mask: 0x1 00:06:45.296 00:06:45.296 Accel Perf Configuration: 00:06:45.296 Workload Type: dualcast 00:06:45.296 Transfer size: 4096 bytes 00:06:45.296 Vector count 1 00:06:45.296 Module: software 00:06:45.296 Queue depth: 32 00:06:45.296 Allocate depth: 32 00:06:45.296 # threads/core: 1 00:06:45.296 Run time: 1 seconds 00:06:45.296 Verify: Yes 00:06:45.296 00:06:45.296 Running for 1 seconds... 00:06:45.296 00:06:45.296 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:45.296 ------------------------------------------------------------------------------------ 00:06:45.296 0,0 362240/s 1415 MiB/s 0 0 00:06:45.296 ==================================================================================== 00:06:45.296 Total 362240/s 1415 MiB/s 0 0' 00:06:45.296 11:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:45.296 11:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:45.296 11:44:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:45.296 11:44:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:45.296 11:44:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.296 11:44:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.297 11:44:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.297 11:44:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.297 11:44:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.297 11:44:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.297 11:44:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.297 11:44:38 -- accel/accel.sh@42 -- # jq -r . 00:06:45.297 [2024-06-10 11:44:38.731814] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:45.297 [2024-06-10 11:44:38.731887] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745379 ] 00:06:45.297 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.297 [2024-06-10 11:44:38.792540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.297 [2024-06-10 11:44:38.854401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.297 11:44:38 -- accel/accel.sh@21 -- # val= 00:06:45.297 11:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:45.297 11:44:38 -- accel/accel.sh@21 -- # val= 00:06:45.297 11:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:45.297 11:44:38 -- accel/accel.sh@21 -- # val=0x1 00:06:45.297 11:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:45.297 11:44:38 -- accel/accel.sh@21 -- # val= 00:06:45.297 11:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:45.297 11:44:38 -- accel/accel.sh@21 -- # val= 00:06:45.297 11:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:45.297 11:44:38 -- accel/accel.sh@21 -- # val=dualcast 00:06:45.297 11:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.297 11:44:38 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:45.297 11:44:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:45.297 11:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:45.297 11:44:38 -- accel/accel.sh@21 -- # val= 00:06:45.297 11:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:45.297 11:44:38 -- accel/accel.sh@21 -- # val=software 00:06:45.297 11:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.297 11:44:38 -- accel/accel.sh@23 -- # accel_module=software 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:45.297 11:44:38 -- accel/accel.sh@21 -- # val=32 00:06:45.297 11:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:45.297 11:44:38 -- accel/accel.sh@21 -- # val=32 00:06:45.297 11:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:45.297 11:44:38 -- accel/accel.sh@21 -- # val=1 00:06:45.297 11:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:45.297 11:44:38 
-- accel/accel.sh@21 -- # val='1 seconds' 00:06:45.297 11:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:45.297 11:44:38 -- accel/accel.sh@21 -- # val=Yes 00:06:45.297 11:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:45.297 11:44:38 -- accel/accel.sh@21 -- # val= 00:06:45.297 11:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:45.297 11:44:38 -- accel/accel.sh@21 -- # val= 00:06:45.297 11:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:45.297 11:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:46.238 11:44:39 -- accel/accel.sh@21 -- # val= 00:06:46.238 11:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.238 11:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:46.238 11:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:46.238 11:44:39 -- accel/accel.sh@21 -- # val= 00:06:46.238 11:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.238 11:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:46.238 11:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:46.238 11:44:39 -- accel/accel.sh@21 -- # val= 00:06:46.238 11:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.238 11:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:46.238 11:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:46.238 11:44:39 -- accel/accel.sh@21 -- # val= 00:06:46.238 11:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.238 11:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:46.238 11:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:46.238 11:44:39 -- accel/accel.sh@21 -- # val= 00:06:46.238 11:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.238 11:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:46.238 11:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:46.238 11:44:39 -- accel/accel.sh@21 -- # val= 00:06:46.238 11:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.238 11:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:46.238 11:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:46.238 11:44:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:46.238 11:44:39 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:46.238 11:44:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.238 00:06:46.238 real 0m2.559s 00:06:46.238 user 0m2.366s 00:06:46.238 sys 0m0.197s 00:06:46.238 11:44:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.238 11:44:39 -- common/autotest_common.sh@10 -- # set +x 00:06:46.238 ************************************ 00:06:46.238 END TEST accel_dualcast 00:06:46.238 ************************************ 00:06:46.499 11:44:40 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:46.499 11:44:40 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:46.499 11:44:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.499 11:44:40 -- common/autotest_common.sh@10 -- # set +x 00:06:46.499 ************************************ 00:06:46.499 START TEST accel_compare 00:06:46.499 ************************************ 00:06:46.499 11:44:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:06:46.499 11:44:40 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.499 11:44:40 
-- accel/accel.sh@17 -- # local accel_module 00:06:46.499 11:44:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:46.499 11:44:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:46.499 11:44:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.499 11:44:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.499 11:44:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.499 11:44:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.499 11:44:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.499 11:44:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.499 11:44:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.499 11:44:40 -- accel/accel.sh@42 -- # jq -r . 00:06:46.499 [2024-06-10 11:44:40.054606] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:46.499 [2024-06-10 11:44:40.054679] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745736 ] 00:06:46.499 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.499 [2024-06-10 11:44:40.115978] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.499 [2024-06-10 11:44:40.179252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.887 11:44:41 -- accel/accel.sh@18 -- # out=' 00:06:47.887 SPDK Configuration: 00:06:47.887 Core mask: 0x1 00:06:47.887 00:06:47.887 Accel Perf Configuration: 00:06:47.887 Workload Type: compare 00:06:47.887 Transfer size: 4096 bytes 00:06:47.887 Vector count 1 00:06:47.887 Module: software 00:06:47.887 Queue depth: 32 00:06:47.887 Allocate depth: 32 00:06:47.887 # threads/core: 1 00:06:47.887 Run time: 1 seconds 00:06:47.887 Verify: Yes 00:06:47.887 00:06:47.887 Running for 1 seconds... 00:06:47.887 00:06:47.887 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:47.887 ------------------------------------------------------------------------------------ 00:06:47.887 0,0 434592/s 1697 MiB/s 0 0 00:06:47.887 ==================================================================================== 00:06:47.887 Total 434592/s 1697 MiB/s 0 0' 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # IFS=: 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # read -r var val 00:06:47.887 11:44:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:47.887 11:44:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:47.887 11:44:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.887 11:44:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.887 11:44:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.887 11:44:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.887 11:44:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.887 11:44:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.887 11:44:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.887 11:44:41 -- accel/accel.sh@42 -- # jq -r . 00:06:47.887 [2024-06-10 11:44:41.331772] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:47.887 [2024-06-10 11:44:41.331849] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1746071 ] 00:06:47.887 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.887 [2024-06-10 11:44:41.392634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.887 [2024-06-10 11:44:41.454741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.887 11:44:41 -- accel/accel.sh@21 -- # val= 00:06:47.887 11:44:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # IFS=: 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # read -r var val 00:06:47.887 11:44:41 -- accel/accel.sh@21 -- # val= 00:06:47.887 11:44:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # IFS=: 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # read -r var val 00:06:47.887 11:44:41 -- accel/accel.sh@21 -- # val=0x1 00:06:47.887 11:44:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # IFS=: 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # read -r var val 00:06:47.887 11:44:41 -- accel/accel.sh@21 -- # val= 00:06:47.887 11:44:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # IFS=: 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # read -r var val 00:06:47.887 11:44:41 -- accel/accel.sh@21 -- # val= 00:06:47.887 11:44:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # IFS=: 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # read -r var val 00:06:47.887 11:44:41 -- accel/accel.sh@21 -- # val=compare 00:06:47.887 11:44:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.887 11:44:41 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # IFS=: 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # read -r var val 00:06:47.887 11:44:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:47.887 11:44:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # IFS=: 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # read -r var val 00:06:47.887 11:44:41 -- accel/accel.sh@21 -- # val= 00:06:47.887 11:44:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # IFS=: 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # read -r var val 00:06:47.887 11:44:41 -- accel/accel.sh@21 -- # val=software 00:06:47.887 11:44:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.887 11:44:41 -- accel/accel.sh@23 -- # accel_module=software 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # IFS=: 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # read -r var val 00:06:47.887 11:44:41 -- accel/accel.sh@21 -- # val=32 00:06:47.887 11:44:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # IFS=: 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # read -r var val 00:06:47.887 11:44:41 -- accel/accel.sh@21 -- # val=32 00:06:47.887 11:44:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # IFS=: 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # read -r var val 00:06:47.887 11:44:41 -- accel/accel.sh@21 -- # val=1 00:06:47.887 11:44:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # IFS=: 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # read -r var val 00:06:47.887 11:44:41 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:47.887 11:44:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # IFS=: 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # read -r var val 00:06:47.887 11:44:41 -- accel/accel.sh@21 -- # val=Yes 00:06:47.887 11:44:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # IFS=: 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # read -r var val 00:06:47.887 11:44:41 -- accel/accel.sh@21 -- # val= 00:06:47.887 11:44:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # IFS=: 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # read -r var val 00:06:47.887 11:44:41 -- accel/accel.sh@21 -- # val= 00:06:47.887 11:44:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # IFS=: 00:06:47.887 11:44:41 -- accel/accel.sh@20 -- # read -r var val 00:06:48.831 11:44:42 -- accel/accel.sh@21 -- # val= 00:06:48.831 11:44:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.831 11:44:42 -- accel/accel.sh@20 -- # IFS=: 00:06:48.831 11:44:42 -- accel/accel.sh@20 -- # read -r var val 00:06:48.831 11:44:42 -- accel/accel.sh@21 -- # val= 00:06:48.831 11:44:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.831 11:44:42 -- accel/accel.sh@20 -- # IFS=: 00:06:48.831 11:44:42 -- accel/accel.sh@20 -- # read -r var val 00:06:48.831 11:44:42 -- accel/accel.sh@21 -- # val= 00:06:48.831 11:44:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.831 11:44:42 -- accel/accel.sh@20 -- # IFS=: 00:06:48.831 11:44:42 -- accel/accel.sh@20 -- # read -r var val 00:06:48.831 11:44:42 -- accel/accel.sh@21 -- # val= 00:06:48.831 11:44:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.831 11:44:42 -- accel/accel.sh@20 -- # IFS=: 00:06:48.831 11:44:42 -- accel/accel.sh@20 -- # read -r var val 00:06:48.831 11:44:42 -- accel/accel.sh@21 -- # val= 00:06:48.831 11:44:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.831 11:44:42 -- accel/accel.sh@20 -- # IFS=: 00:06:48.831 11:44:42 -- accel/accel.sh@20 -- # read -r var val 00:06:48.831 11:44:42 -- accel/accel.sh@21 -- # val= 00:06:48.831 11:44:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.831 11:44:42 -- accel/accel.sh@20 -- # IFS=: 00:06:48.831 11:44:42 -- accel/accel.sh@20 -- # read -r var val 00:06:48.831 11:44:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:48.831 11:44:42 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:48.831 11:44:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.831 00:06:48.831 real 0m2.557s 00:06:48.831 user 0m2.370s 00:06:48.831 sys 0m0.192s 00:06:48.831 11:44:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.831 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:06:48.831 ************************************ 00:06:48.831 END TEST accel_compare 00:06:48.831 ************************************ 00:06:49.092 11:44:42 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:49.092 11:44:42 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:49.092 11:44:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.092 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:06:49.092 ************************************ 00:06:49.092 START TEST accel_xor 00:06:49.092 ************************************ 00:06:49.092 11:44:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:06:49.092 11:44:42 -- accel/accel.sh@16 -- # local accel_opc 00:06:49.092 11:44:42 -- accel/accel.sh@17 
-- # local accel_module 00:06:49.092 11:44:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:49.092 11:44:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:49.092 11:44:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.092 11:44:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.092 11:44:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.092 11:44:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.092 11:44:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.092 11:44:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.092 11:44:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.092 11:44:42 -- accel/accel.sh@42 -- # jq -r . 00:06:49.092 [2024-06-10 11:44:42.653003] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:49.092 [2024-06-10 11:44:42.653077] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1746283 ] 00:06:49.092 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.092 [2024-06-10 11:44:42.715723] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.092 [2024-06-10 11:44:42.781501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.478 11:44:43 -- accel/accel.sh@18 -- # out=' 00:06:50.478 SPDK Configuration: 00:06:50.478 Core mask: 0x1 00:06:50.478 00:06:50.478 Accel Perf Configuration: 00:06:50.478 Workload Type: xor 00:06:50.478 Source buffers: 2 00:06:50.478 Transfer size: 4096 bytes 00:06:50.478 Vector count 1 00:06:50.478 Module: software 00:06:50.478 Queue depth: 32 00:06:50.478 Allocate depth: 32 00:06:50.478 # threads/core: 1 00:06:50.478 Run time: 1 seconds 00:06:50.478 Verify: Yes 00:06:50.478 00:06:50.478 Running for 1 seconds... 00:06:50.478 00:06:50.478 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:50.478 ------------------------------------------------------------------------------------ 00:06:50.478 0,0 361440/s 1411 MiB/s 0 0 00:06:50.478 ==================================================================================== 00:06:50.478 Total 361440/s 1411 MiB/s 0 0' 00:06:50.478 11:44:43 -- accel/accel.sh@20 -- # IFS=: 00:06:50.478 11:44:43 -- accel/accel.sh@20 -- # read -r var val 00:06:50.478 11:44:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:50.478 11:44:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:50.478 11:44:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.478 11:44:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.478 11:44:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.478 11:44:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.478 11:44:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.478 11:44:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.478 11:44:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.478 11:44:43 -- accel/accel.sh@42 -- # jq -r . 00:06:50.478 [2024-06-10 11:44:43.932410] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
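
A quick cross-check of the xor result above: the per-core line (0,0 361440/s 1411 MiB/s) follows directly from the 4096-byte transfer size, since 361440 transfers/s at 4096 bytes each is about 1412 MiB/s. The awk below only redoes that arithmetic with the figures printed in the table; it is not part of accel.sh.

awk 'BEGIN {
    tps  = 361440                              # transfers per second, from the table above
    xfer = 4096                                # "Transfer size: 4096 bytes"
    printf "%.1f MiB/s\n", tps * xfer / 2^20   # ~1411.9, matching the reported 1411 MiB/s
}'
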
00:06:50.478 [2024-06-10 11:44:43.932504] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1746445 ] 00:06:50.478 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.478 [2024-06-10 11:44:43.995324] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.478 [2024-06-10 11:44:44.059142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.478 11:44:44 -- accel/accel.sh@21 -- # val= 00:06:50.478 11:44:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.478 11:44:44 -- accel/accel.sh@20 -- # IFS=: 00:06:50.478 11:44:44 -- accel/accel.sh@20 -- # read -r var val 00:06:50.478 11:44:44 -- accel/accel.sh@21 -- # val= 00:06:50.478 11:44:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.478 11:44:44 -- accel/accel.sh@20 -- # IFS=: 00:06:50.478 11:44:44 -- accel/accel.sh@20 -- # read -r var val 00:06:50.479 11:44:44 -- accel/accel.sh@21 -- # val=0x1 00:06:50.479 11:44:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # IFS=: 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # read -r var val 00:06:50.479 11:44:44 -- accel/accel.sh@21 -- # val= 00:06:50.479 11:44:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # IFS=: 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # read -r var val 00:06:50.479 11:44:44 -- accel/accel.sh@21 -- # val= 00:06:50.479 11:44:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # IFS=: 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # read -r var val 00:06:50.479 11:44:44 -- accel/accel.sh@21 -- # val=xor 00:06:50.479 11:44:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.479 11:44:44 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # IFS=: 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # read -r var val 00:06:50.479 11:44:44 -- accel/accel.sh@21 -- # val=2 00:06:50.479 11:44:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # IFS=: 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # read -r var val 00:06:50.479 11:44:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:50.479 11:44:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # IFS=: 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # read -r var val 00:06:50.479 11:44:44 -- accel/accel.sh@21 -- # val= 00:06:50.479 11:44:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # IFS=: 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # read -r var val 00:06:50.479 11:44:44 -- accel/accel.sh@21 -- # val=software 00:06:50.479 11:44:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.479 11:44:44 -- accel/accel.sh@23 -- # accel_module=software 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # IFS=: 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # read -r var val 00:06:50.479 11:44:44 -- accel/accel.sh@21 -- # val=32 00:06:50.479 11:44:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # IFS=: 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # read -r var val 00:06:50.479 11:44:44 -- accel/accel.sh@21 -- # val=32 00:06:50.479 11:44:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # IFS=: 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # read -r var val 00:06:50.479 11:44:44 -- 
accel/accel.sh@21 -- # val=1 00:06:50.479 11:44:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # IFS=: 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # read -r var val 00:06:50.479 11:44:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:50.479 11:44:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # IFS=: 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # read -r var val 00:06:50.479 11:44:44 -- accel/accel.sh@21 -- # val=Yes 00:06:50.479 11:44:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # IFS=: 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # read -r var val 00:06:50.479 11:44:44 -- accel/accel.sh@21 -- # val= 00:06:50.479 11:44:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # IFS=: 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # read -r var val 00:06:50.479 11:44:44 -- accel/accel.sh@21 -- # val= 00:06:50.479 11:44:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # IFS=: 00:06:50.479 11:44:44 -- accel/accel.sh@20 -- # read -r var val 00:06:51.422 11:44:45 -- accel/accel.sh@21 -- # val= 00:06:51.422 11:44:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.422 11:44:45 -- accel/accel.sh@20 -- # IFS=: 00:06:51.422 11:44:45 -- accel/accel.sh@20 -- # read -r var val 00:06:51.422 11:44:45 -- accel/accel.sh@21 -- # val= 00:06:51.422 11:44:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.422 11:44:45 -- accel/accel.sh@20 -- # IFS=: 00:06:51.422 11:44:45 -- accel/accel.sh@20 -- # read -r var val 00:06:51.422 11:44:45 -- accel/accel.sh@21 -- # val= 00:06:51.422 11:44:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.422 11:44:45 -- accel/accel.sh@20 -- # IFS=: 00:06:51.422 11:44:45 -- accel/accel.sh@20 -- # read -r var val 00:06:51.422 11:44:45 -- accel/accel.sh@21 -- # val= 00:06:51.422 11:44:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.422 11:44:45 -- accel/accel.sh@20 -- # IFS=: 00:06:51.422 11:44:45 -- accel/accel.sh@20 -- # read -r var val 00:06:51.422 11:44:45 -- accel/accel.sh@21 -- # val= 00:06:51.422 11:44:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.422 11:44:45 -- accel/accel.sh@20 -- # IFS=: 00:06:51.422 11:44:45 -- accel/accel.sh@20 -- # read -r var val 00:06:51.422 11:44:45 -- accel/accel.sh@21 -- # val= 00:06:51.422 11:44:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.422 11:44:45 -- accel/accel.sh@20 -- # IFS=: 00:06:51.422 11:44:45 -- accel/accel.sh@20 -- # read -r var val 00:06:51.422 11:44:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:51.422 11:44:45 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:51.422 11:44:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.422 00:06:51.422 real 0m2.564s 00:06:51.422 user 0m2.373s 00:06:51.422 sys 0m0.196s 00:06:51.422 11:44:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.422 11:44:45 -- common/autotest_common.sh@10 -- # set +x 00:06:51.422 ************************************ 00:06:51.422 END TEST accel_xor 00:06:51.422 ************************************ 00:06:51.683 11:44:45 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:51.683 11:44:45 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:51.683 11:44:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:51.683 11:44:45 -- common/autotest_common.sh@10 -- # set +x 00:06:51.683 ************************************ 00:06:51.683 START TEST accel_xor 
00:06:51.683 ************************************ 00:06:51.683 11:44:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:06:51.683 11:44:45 -- accel/accel.sh@16 -- # local accel_opc 00:06:51.683 11:44:45 -- accel/accel.sh@17 -- # local accel_module 00:06:51.683 11:44:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:51.683 11:44:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:51.683 11:44:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.683 11:44:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.683 11:44:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.683 11:44:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.683 11:44:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.683 11:44:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.683 11:44:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.683 11:44:45 -- accel/accel.sh@42 -- # jq -r . 00:06:51.683 [2024-06-10 11:44:45.256379] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:51.683 [2024-06-10 11:44:45.256504] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1746796 ] 00:06:51.683 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.683 [2024-06-10 11:44:45.326916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.683 [2024-06-10 11:44:45.391248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.069 11:44:46 -- accel/accel.sh@18 -- # out=' 00:06:53.069 SPDK Configuration: 00:06:53.069 Core mask: 0x1 00:06:53.069 00:06:53.069 Accel Perf Configuration: 00:06:53.069 Workload Type: xor 00:06:53.069 Source buffers: 3 00:06:53.069 Transfer size: 4096 bytes 00:06:53.069 Vector count 1 00:06:53.069 Module: software 00:06:53.069 Queue depth: 32 00:06:53.069 Allocate depth: 32 00:06:53.069 # threads/core: 1 00:06:53.069 Run time: 1 seconds 00:06:53.069 Verify: Yes 00:06:53.069 00:06:53.069 Running for 1 seconds... 00:06:53.069 00:06:53.069 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:53.069 ------------------------------------------------------------------------------------ 00:06:53.069 0,0 343648/s 1342 MiB/s 0 0 00:06:53.069 ==================================================================================== 00:06:53.069 Total 343648/s 1342 MiB/s 0 0' 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # IFS=: 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # read -r var val 00:06:53.069 11:44:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:53.069 11:44:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:53.069 11:44:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.069 11:44:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.069 11:44:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.069 11:44:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.069 11:44:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.069 11:44:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.069 11:44:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.069 11:44:46 -- accel/accel.sh@42 -- # jq -r . 00:06:53.069 [2024-06-10 11:44:46.544116] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
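
The second xor pass above runs with three source buffers ("Source buffers: 3") and verification enabled ("Verify: Yes"), matching the -x 3 and -y flags in the accel_perf command line. To reproduce a single run by hand outside the harness, the same binary can be invoked directly; the path is this CI workspace's checkout, and dropping -c /dev/fd/62 assumes no JSON accel configuration is needed for the plain software module.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# -t 1 : run for 1 second              ("Run time: 1 seconds")
# -w   : workload type                 ("Workload Type: xor")
# -y   : verify the results            ("Verify: Yes")
# -x 3 : number of xor source buffers  ("Source buffers: 3")
"$SPDK/build/examples/accel_perf" -t 1 -w xor -y -x 3
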
00:06:53.069 [2024-06-10 11:44:46.544194] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1747132 ] 00:06:53.069 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.069 [2024-06-10 11:44:46.605186] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.069 [2024-06-10 11:44:46.667172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.069 11:44:46 -- accel/accel.sh@21 -- # val= 00:06:53.069 11:44:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # IFS=: 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # read -r var val 00:06:53.069 11:44:46 -- accel/accel.sh@21 -- # val= 00:06:53.069 11:44:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # IFS=: 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # read -r var val 00:06:53.069 11:44:46 -- accel/accel.sh@21 -- # val=0x1 00:06:53.069 11:44:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # IFS=: 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # read -r var val 00:06:53.069 11:44:46 -- accel/accel.sh@21 -- # val= 00:06:53.069 11:44:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # IFS=: 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # read -r var val 00:06:53.069 11:44:46 -- accel/accel.sh@21 -- # val= 00:06:53.069 11:44:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # IFS=: 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # read -r var val 00:06:53.069 11:44:46 -- accel/accel.sh@21 -- # val=xor 00:06:53.069 11:44:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.069 11:44:46 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # IFS=: 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # read -r var val 00:06:53.069 11:44:46 -- accel/accel.sh@21 -- # val=3 00:06:53.069 11:44:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # IFS=: 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # read -r var val 00:06:53.069 11:44:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:53.069 11:44:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # IFS=: 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # read -r var val 00:06:53.069 11:44:46 -- accel/accel.sh@21 -- # val= 00:06:53.069 11:44:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # IFS=: 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # read -r var val 00:06:53.069 11:44:46 -- accel/accel.sh@21 -- # val=software 00:06:53.069 11:44:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.069 11:44:46 -- accel/accel.sh@23 -- # accel_module=software 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # IFS=: 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # read -r var val 00:06:53.069 11:44:46 -- accel/accel.sh@21 -- # val=32 00:06:53.069 11:44:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # IFS=: 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # read -r var val 00:06:53.069 11:44:46 -- accel/accel.sh@21 -- # val=32 00:06:53.069 11:44:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # IFS=: 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # read -r var val 00:06:53.069 11:44:46 -- 
accel/accel.sh@21 -- # val=1 00:06:53.069 11:44:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # IFS=: 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # read -r var val 00:06:53.069 11:44:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:53.069 11:44:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # IFS=: 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # read -r var val 00:06:53.069 11:44:46 -- accel/accel.sh@21 -- # val=Yes 00:06:53.069 11:44:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # IFS=: 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # read -r var val 00:06:53.069 11:44:46 -- accel/accel.sh@21 -- # val= 00:06:53.069 11:44:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # IFS=: 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # read -r var val 00:06:53.069 11:44:46 -- accel/accel.sh@21 -- # val= 00:06:53.069 11:44:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # IFS=: 00:06:53.069 11:44:46 -- accel/accel.sh@20 -- # read -r var val 00:06:54.452 11:44:47 -- accel/accel.sh@21 -- # val= 00:06:54.452 11:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.452 11:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:54.452 11:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:54.452 11:44:47 -- accel/accel.sh@21 -- # val= 00:06:54.452 11:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.452 11:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:54.452 11:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:54.452 11:44:47 -- accel/accel.sh@21 -- # val= 00:06:54.452 11:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.452 11:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:54.452 11:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:54.452 11:44:47 -- accel/accel.sh@21 -- # val= 00:06:54.452 11:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.452 11:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:54.452 11:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:54.452 11:44:47 -- accel/accel.sh@21 -- # val= 00:06:54.452 11:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.452 11:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:54.452 11:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:54.452 11:44:47 -- accel/accel.sh@21 -- # val= 00:06:54.452 11:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.452 11:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:54.452 11:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:54.452 11:44:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:54.452 11:44:47 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:54.452 11:44:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.452 00:06:54.452 real 0m2.570s 00:06:54.452 user 0m2.374s 00:06:54.452 sys 0m0.201s 00:06:54.452 11:44:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.452 11:44:47 -- common/autotest_common.sh@10 -- # set +x 00:06:54.452 ************************************ 00:06:54.452 END TEST accel_xor 00:06:54.452 ************************************ 00:06:54.452 11:44:47 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:54.452 11:44:47 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:54.452 11:44:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.452 11:44:47 -- common/autotest_common.sh@10 -- # set +x 00:06:54.452 ************************************ 00:06:54.452 START TEST 
accel_dif_verify 00:06:54.452 ************************************ 00:06:54.452 11:44:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:06:54.452 11:44:47 -- accel/accel.sh@16 -- # local accel_opc 00:06:54.452 11:44:47 -- accel/accel.sh@17 -- # local accel_module 00:06:54.452 11:44:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:54.452 11:44:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:54.452 11:44:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.452 11:44:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.452 11:44:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.452 11:44:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.452 11:44:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.452 11:44:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.452 11:44:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.452 11:44:47 -- accel/accel.sh@42 -- # jq -r . 00:06:54.452 [2024-06-10 11:44:47.865346] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:54.452 [2024-06-10 11:44:47.865415] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1747456 ] 00:06:54.452 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.452 [2024-06-10 11:44:47.926096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.452 [2024-06-10 11:44:47.989267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.488 11:44:49 -- accel/accel.sh@18 -- # out=' 00:06:55.488 SPDK Configuration: 00:06:55.488 Core mask: 0x1 00:06:55.488 00:06:55.488 Accel Perf Configuration: 00:06:55.488 Workload Type: dif_verify 00:06:55.488 Vector size: 4096 bytes 00:06:55.488 Transfer size: 4096 bytes 00:06:55.488 Block size: 512 bytes 00:06:55.488 Metadata size: 8 bytes 00:06:55.488 Vector count 1 00:06:55.488 Module: software 00:06:55.488 Queue depth: 32 00:06:55.488 Allocate depth: 32 00:06:55.488 # threads/core: 1 00:06:55.488 Run time: 1 seconds 00:06:55.488 Verify: No 00:06:55.488 00:06:55.488 Running for 1 seconds... 00:06:55.488 00:06:55.488 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:55.488 ------------------------------------------------------------------------------------ 00:06:55.488 0,0 95040/s 377 MiB/s 0 0 00:06:55.488 ==================================================================================== 00:06:55.488 Total 95040/s 371 MiB/s 0 0' 00:06:55.488 11:44:49 -- accel/accel.sh@20 -- # IFS=: 00:06:55.488 11:44:49 -- accel/accel.sh@20 -- # read -r var val 00:06:55.488 11:44:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:55.488 11:44:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:55.488 11:44:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.488 11:44:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.488 11:44:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.488 11:44:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.488 11:44:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.488 11:44:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.488 11:44:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.488 11:44:49 -- accel/accel.sh@42 -- # jq -r . 
00:06:55.488 [2024-06-10 11:44:49.141693] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:55.488 [2024-06-10 11:44:49.141795] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1747577 ] 00:06:55.488 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.488 [2024-06-10 11:44:49.203491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.760 [2024-06-10 11:44:49.265957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.760 11:44:49 -- accel/accel.sh@21 -- # val= 00:06:55.760 11:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # IFS=: 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # read -r var val 00:06:55.760 11:44:49 -- accel/accel.sh@21 -- # val= 00:06:55.760 11:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # IFS=: 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # read -r var val 00:06:55.760 11:44:49 -- accel/accel.sh@21 -- # val=0x1 00:06:55.760 11:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # IFS=: 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # read -r var val 00:06:55.760 11:44:49 -- accel/accel.sh@21 -- # val= 00:06:55.760 11:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # IFS=: 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # read -r var val 00:06:55.760 11:44:49 -- accel/accel.sh@21 -- # val= 00:06:55.760 11:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # IFS=: 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # read -r var val 00:06:55.760 11:44:49 -- accel/accel.sh@21 -- # val=dif_verify 00:06:55.760 11:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.760 11:44:49 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # IFS=: 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # read -r var val 00:06:55.760 11:44:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:55.760 11:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # IFS=: 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # read -r var val 00:06:55.760 11:44:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:55.760 11:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # IFS=: 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # read -r var val 00:06:55.760 11:44:49 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:55.760 11:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # IFS=: 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # read -r var val 00:06:55.760 11:44:49 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:55.760 11:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # IFS=: 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # read -r var val 00:06:55.760 11:44:49 -- accel/accel.sh@21 -- # val= 00:06:55.760 11:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # IFS=: 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # read -r var val 00:06:55.760 11:44:49 -- accel/accel.sh@21 -- # val=software 00:06:55.760 11:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.760 11:44:49 -- accel/accel.sh@23 -- # 
accel_module=software 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # IFS=: 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # read -r var val 00:06:55.760 11:44:49 -- accel/accel.sh@21 -- # val=32 00:06:55.760 11:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # IFS=: 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # read -r var val 00:06:55.760 11:44:49 -- accel/accel.sh@21 -- # val=32 00:06:55.760 11:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # IFS=: 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # read -r var val 00:06:55.760 11:44:49 -- accel/accel.sh@21 -- # val=1 00:06:55.760 11:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # IFS=: 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # read -r var val 00:06:55.760 11:44:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:55.760 11:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # IFS=: 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # read -r var val 00:06:55.760 11:44:49 -- accel/accel.sh@21 -- # val=No 00:06:55.760 11:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # IFS=: 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # read -r var val 00:06:55.760 11:44:49 -- accel/accel.sh@21 -- # val= 00:06:55.760 11:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # IFS=: 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # read -r var val 00:06:55.760 11:44:49 -- accel/accel.sh@21 -- # val= 00:06:55.760 11:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # IFS=: 00:06:55.760 11:44:49 -- accel/accel.sh@20 -- # read -r var val 00:06:56.702 11:44:50 -- accel/accel.sh@21 -- # val= 00:06:56.703 11:44:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.703 11:44:50 -- accel/accel.sh@20 -- # IFS=: 00:06:56.703 11:44:50 -- accel/accel.sh@20 -- # read -r var val 00:06:56.703 11:44:50 -- accel/accel.sh@21 -- # val= 00:06:56.703 11:44:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.703 11:44:50 -- accel/accel.sh@20 -- # IFS=: 00:06:56.703 11:44:50 -- accel/accel.sh@20 -- # read -r var val 00:06:56.703 11:44:50 -- accel/accel.sh@21 -- # val= 00:06:56.703 11:44:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.703 11:44:50 -- accel/accel.sh@20 -- # IFS=: 00:06:56.703 11:44:50 -- accel/accel.sh@20 -- # read -r var val 00:06:56.703 11:44:50 -- accel/accel.sh@21 -- # val= 00:06:56.703 11:44:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.703 11:44:50 -- accel/accel.sh@20 -- # IFS=: 00:06:56.703 11:44:50 -- accel/accel.sh@20 -- # read -r var val 00:06:56.703 11:44:50 -- accel/accel.sh@21 -- # val= 00:06:56.703 11:44:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.703 11:44:50 -- accel/accel.sh@20 -- # IFS=: 00:06:56.703 11:44:50 -- accel/accel.sh@20 -- # read -r var val 00:06:56.703 11:44:50 -- accel/accel.sh@21 -- # val= 00:06:56.703 11:44:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.703 11:44:50 -- accel/accel.sh@20 -- # IFS=: 00:06:56.703 11:44:50 -- accel/accel.sh@20 -- # read -r var val 00:06:56.703 11:44:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:56.703 11:44:50 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:56.703 11:44:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.703 00:06:56.703 real 0m2.558s 00:06:56.703 user 0m2.369s 00:06:56.703 sys 0m0.197s 00:06:56.703 11:44:50 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.703 11:44:50 -- common/autotest_common.sh@10 -- # set +x 00:06:56.703 ************************************ 00:06:56.703 END TEST accel_dif_verify 00:06:56.703 ************************************ 00:06:56.703 11:44:50 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:56.703 11:44:50 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:56.703 11:44:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:56.703 11:44:50 -- common/autotest_common.sh@10 -- # set +x 00:06:56.703 ************************************ 00:06:56.703 START TEST accel_dif_generate 00:06:56.703 ************************************ 00:06:56.703 11:44:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:06:56.703 11:44:50 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.703 11:44:50 -- accel/accel.sh@17 -- # local accel_module 00:06:56.703 11:44:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:56.703 11:44:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:56.703 11:44:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.703 11:44:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.703 11:44:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.703 11:44:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.703 11:44:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.703 11:44:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.703 11:44:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.703 11:44:50 -- accel/accel.sh@42 -- # jq -r . 00:06:56.703 [2024-06-10 11:44:50.467023] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:56.703 [2024-06-10 11:44:50.467125] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1747859 ] 00:06:56.964 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.964 [2024-06-10 11:44:50.530748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.964 [2024-06-10 11:44:50.595063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.348 11:44:51 -- accel/accel.sh@18 -- # out=' 00:06:58.348 SPDK Configuration: 00:06:58.348 Core mask: 0x1 00:06:58.348 00:06:58.348 Accel Perf Configuration: 00:06:58.348 Workload Type: dif_generate 00:06:58.348 Vector size: 4096 bytes 00:06:58.348 Transfer size: 4096 bytes 00:06:58.348 Block size: 512 bytes 00:06:58.348 Metadata size: 8 bytes 00:06:58.348 Vector count 1 00:06:58.348 Module: software 00:06:58.348 Queue depth: 32 00:06:58.348 Allocate depth: 32 00:06:58.348 # threads/core: 1 00:06:58.348 Run time: 1 seconds 00:06:58.348 Verify: No 00:06:58.348 00:06:58.348 Running for 1 seconds... 
00:06:58.348 00:06:58.348 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:58.348 ------------------------------------------------------------------------------------ 00:06:58.348 0,0 114496/s 454 MiB/s 0 0 00:06:58.348 ==================================================================================== 00:06:58.348 Total 114496/s 447 MiB/s 0 0' 00:06:58.348 11:44:51 -- accel/accel.sh@20 -- # IFS=: 00:06:58.348 11:44:51 -- accel/accel.sh@20 -- # read -r var val 00:06:58.348 11:44:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:58.348 11:44:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:58.348 11:44:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.348 11:44:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.348 11:44:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.348 11:44:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.348 11:44:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.348 11:44:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.348 11:44:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.348 11:44:51 -- accel/accel.sh@42 -- # jq -r . 00:06:58.348 [2024-06-10 11:44:51.748860] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:58.348 [2024-06-10 11:44:51.748963] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1748195 ] 00:06:58.348 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.348 [2024-06-10 11:44:51.811278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.349 [2024-06-10 11:44:51.873299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.349 11:44:51 -- accel/accel.sh@21 -- # val= 00:06:58.349 11:44:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # IFS=: 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # read -r var val 00:06:58.349 11:44:51 -- accel/accel.sh@21 -- # val= 00:06:58.349 11:44:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # IFS=: 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # read -r var val 00:06:58.349 11:44:51 -- accel/accel.sh@21 -- # val=0x1 00:06:58.349 11:44:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # IFS=: 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # read -r var val 00:06:58.349 11:44:51 -- accel/accel.sh@21 -- # val= 00:06:58.349 11:44:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # IFS=: 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # read -r var val 00:06:58.349 11:44:51 -- accel/accel.sh@21 -- # val= 00:06:58.349 11:44:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # IFS=: 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # read -r var val 00:06:58.349 11:44:51 -- accel/accel.sh@21 -- # val=dif_generate 00:06:58.349 11:44:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.349 11:44:51 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # IFS=: 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # read -r var val 00:06:58.349 11:44:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.349 11:44:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # IFS=: 
00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # read -r var val 00:06:58.349 11:44:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.349 11:44:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # IFS=: 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # read -r var val 00:06:58.349 11:44:51 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:58.349 11:44:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # IFS=: 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # read -r var val 00:06:58.349 11:44:51 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:58.349 11:44:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # IFS=: 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # read -r var val 00:06:58.349 11:44:51 -- accel/accel.sh@21 -- # val= 00:06:58.349 11:44:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # IFS=: 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # read -r var val 00:06:58.349 11:44:51 -- accel/accel.sh@21 -- # val=software 00:06:58.349 11:44:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.349 11:44:51 -- accel/accel.sh@23 -- # accel_module=software 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # IFS=: 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # read -r var val 00:06:58.349 11:44:51 -- accel/accel.sh@21 -- # val=32 00:06:58.349 11:44:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # IFS=: 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # read -r var val 00:06:58.349 11:44:51 -- accel/accel.sh@21 -- # val=32 00:06:58.349 11:44:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # IFS=: 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # read -r var val 00:06:58.349 11:44:51 -- accel/accel.sh@21 -- # val=1 00:06:58.349 11:44:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # IFS=: 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # read -r var val 00:06:58.349 11:44:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:58.349 11:44:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # IFS=: 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # read -r var val 00:06:58.349 11:44:51 -- accel/accel.sh@21 -- # val=No 00:06:58.349 11:44:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # IFS=: 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # read -r var val 00:06:58.349 11:44:51 -- accel/accel.sh@21 -- # val= 00:06:58.349 11:44:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # IFS=: 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # read -r var val 00:06:58.349 11:44:51 -- accel/accel.sh@21 -- # val= 00:06:58.349 11:44:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # IFS=: 00:06:58.349 11:44:51 -- accel/accel.sh@20 -- # read -r var val 00:06:59.290 11:44:52 -- accel/accel.sh@21 -- # val= 00:06:59.290 11:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.290 11:44:52 -- accel/accel.sh@20 -- # IFS=: 00:06:59.290 11:44:52 -- accel/accel.sh@20 -- # read -r var val 00:06:59.290 11:44:52 -- accel/accel.sh@21 -- # val= 00:06:59.290 11:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.290 11:44:52 -- accel/accel.sh@20 -- # IFS=: 00:06:59.290 11:44:52 -- accel/accel.sh@20 -- # read -r var val 00:06:59.290 11:44:52 -- accel/accel.sh@21 -- # val= 00:06:59.290 11:44:52 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:59.290 11:44:52 -- accel/accel.sh@20 -- # IFS=: 00:06:59.290 11:44:52 -- accel/accel.sh@20 -- # read -r var val 00:06:59.290 11:44:52 -- accel/accel.sh@21 -- # val= 00:06:59.290 11:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.290 11:44:52 -- accel/accel.sh@20 -- # IFS=: 00:06:59.290 11:44:52 -- accel/accel.sh@20 -- # read -r var val 00:06:59.290 11:44:52 -- accel/accel.sh@21 -- # val= 00:06:59.290 11:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.290 11:44:53 -- accel/accel.sh@20 -- # IFS=: 00:06:59.290 11:44:53 -- accel/accel.sh@20 -- # read -r var val 00:06:59.290 11:44:53 -- accel/accel.sh@21 -- # val= 00:06:59.290 11:44:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.290 11:44:53 -- accel/accel.sh@20 -- # IFS=: 00:06:59.290 11:44:53 -- accel/accel.sh@20 -- # read -r var val 00:06:59.290 11:44:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:59.290 11:44:53 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:59.290 11:44:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.290 00:06:59.290 real 0m2.565s 00:06:59.290 user 0m2.368s 00:06:59.290 sys 0m0.203s 00:06:59.290 11:44:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.290 11:44:53 -- common/autotest_common.sh@10 -- # set +x 00:06:59.290 ************************************ 00:06:59.290 END TEST accel_dif_generate 00:06:59.290 ************************************ 00:06:59.290 11:44:53 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:59.290 11:44:53 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:59.290 11:44:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:59.290 11:44:53 -- common/autotest_common.sh@10 -- # set +x 00:06:59.290 ************************************ 00:06:59.290 START TEST accel_dif_generate_copy 00:06:59.290 ************************************ 00:06:59.290 11:44:53 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:06:59.290 11:44:53 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.290 11:44:53 -- accel/accel.sh@17 -- # local accel_module 00:06:59.290 11:44:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:59.290 11:44:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:59.290 11:44:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.290 11:44:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.290 11:44:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.290 11:44:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.290 11:44:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.290 11:44:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.290 11:44:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.290 11:44:53 -- accel/accel.sh@42 -- # jq -r . 00:06:59.550 [2024-06-10 11:44:53.072505] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
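
The accel/accel.sh@20-24 lines repeated throughout this trace (IFS=:, read -r var val, case "$var" in, accel_opc=..., accel_module=...) are the harness scanning accel_perf's configuration dump to record which opcode and which module actually ran; that is what the closing accel.sh@28 checks ([[ -n software ]], [[ -n dif_generate ]], [[ software == software ]]) assert. A simplified sketch of the same parsing pattern follows; it is not the verbatim accel.sh code, and the exact case patterns are assumptions.

# "$out" stands for the captured accel_perf output shown in the out=' ... ' blocks above
while IFS=: read -r var val; do
    case "$var" in
        *"Workload Type"*) accel_opc=${val//[[:space:]]/}    ;;  # e.g. dif_generate
        *"Module"*)        accel_module=${val//[[:space:]]/} ;;  # e.g. software
    esac
done <<< "$out"
# the test passes when an opcode and a module were seen and the expected module ran
[[ -n $accel_opc && -n $accel_module && $accel_module == software ]]
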
00:06:59.550 [2024-06-10 11:44:53.072609] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1748546 ] 00:06:59.550 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.550 [2024-06-10 11:44:53.146947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.551 [2024-06-10 11:44:53.212228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.934 11:44:54 -- accel/accel.sh@18 -- # out=' 00:07:00.934 SPDK Configuration: 00:07:00.934 Core mask: 0x1 00:07:00.934 00:07:00.934 Accel Perf Configuration: 00:07:00.934 Workload Type: dif_generate_copy 00:07:00.934 Vector size: 4096 bytes 00:07:00.934 Transfer size: 4096 bytes 00:07:00.934 Vector count 1 00:07:00.934 Module: software 00:07:00.934 Queue depth: 32 00:07:00.934 Allocate depth: 32 00:07:00.934 # threads/core: 1 00:07:00.934 Run time: 1 seconds 00:07:00.934 Verify: No 00:07:00.934 00:07:00.934 Running for 1 seconds... 00:07:00.934 00:07:00.934 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.934 ------------------------------------------------------------------------------------ 00:07:00.934 0,0 87616/s 347 MiB/s 0 0 00:07:00.934 ==================================================================================== 00:07:00.934 Total 87616/s 342 MiB/s 0 0' 00:07:00.934 11:44:54 -- accel/accel.sh@20 -- # IFS=: 00:07:00.934 11:44:54 -- accel/accel.sh@20 -- # read -r var val 00:07:00.934 11:44:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:00.934 11:44:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:00.934 11:44:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.934 11:44:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.934 11:44:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.934 11:44:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.934 11:44:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.934 11:44:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.934 11:44:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.934 11:44:54 -- accel/accel.sh@42 -- # jq -r . 00:07:00.934 [2024-06-10 11:44:54.362983] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
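
The START TEST / END TEST banners and the real/user/sys timings interleaved through this output come from the run_test helper in autotest_common.sh, which wraps each accel_test invocation (the '[' 6 -le 1 ']' lines appear to be its argument checks). A rough sketch of the wrapper's shape, reconstructed only from what it prints here; the real helper also toggles xtrace and tracks exit codes.

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                       # e.g. accel_test -t 1 -w dif_generate_copy
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}
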
00:07:00.934 [2024-06-10 11:44:54.363058] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1748743 ] 00:07:00.934 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.934 [2024-06-10 11:44:54.424463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.934 [2024-06-10 11:44:54.486941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.934 11:44:54 -- accel/accel.sh@21 -- # val= 00:07:00.934 11:44:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.934 11:44:54 -- accel/accel.sh@20 -- # IFS=: 00:07:00.934 11:44:54 -- accel/accel.sh@20 -- # read -r var val 00:07:00.934 11:44:54 -- accel/accel.sh@21 -- # val= 00:07:00.934 11:44:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.934 11:44:54 -- accel/accel.sh@20 -- # IFS=: 00:07:00.934 11:44:54 -- accel/accel.sh@20 -- # read -r var val 00:07:00.934 11:44:54 -- accel/accel.sh@21 -- # val=0x1 00:07:00.934 11:44:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.934 11:44:54 -- accel/accel.sh@20 -- # IFS=: 00:07:00.934 11:44:54 -- accel/accel.sh@20 -- # read -r var val 00:07:00.934 11:44:54 -- accel/accel.sh@21 -- # val= 00:07:00.934 11:44:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.934 11:44:54 -- accel/accel.sh@20 -- # IFS=: 00:07:00.934 11:44:54 -- accel/accel.sh@20 -- # read -r var val 00:07:00.935 11:44:54 -- accel/accel.sh@21 -- # val= 00:07:00.935 11:44:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.935 11:44:54 -- accel/accel.sh@20 -- # IFS=: 00:07:00.935 11:44:54 -- accel/accel.sh@20 -- # read -r var val 00:07:00.935 11:44:54 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:00.935 11:44:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.935 11:44:54 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:00.935 11:44:54 -- accel/accel.sh@20 -- # IFS=: 00:07:00.935 11:44:54 -- accel/accel.sh@20 -- # read -r var val 00:07:00.935 11:44:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.935 11:44:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.935 11:44:54 -- accel/accel.sh@20 -- # IFS=: 00:07:00.935 11:44:54 -- accel/accel.sh@20 -- # read -r var val 00:07:00.935 11:44:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.935 11:44:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.935 11:44:54 -- accel/accel.sh@20 -- # IFS=: 00:07:00.935 11:44:54 -- accel/accel.sh@20 -- # read -r var val 00:07:00.935 11:44:54 -- accel/accel.sh@21 -- # val= 00:07:00.935 11:44:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.935 11:44:54 -- accel/accel.sh@20 -- # IFS=: 00:07:00.935 11:44:54 -- accel/accel.sh@20 -- # read -r var val 00:07:00.935 11:44:54 -- accel/accel.sh@21 -- # val=software 00:07:00.935 11:44:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.935 11:44:54 -- accel/accel.sh@23 -- # accel_module=software 00:07:00.935 11:44:54 -- accel/accel.sh@20 -- # IFS=: 00:07:00.935 11:44:54 -- accel/accel.sh@20 -- # read -r var val 00:07:00.935 11:44:54 -- accel/accel.sh@21 -- # val=32 00:07:00.935 11:44:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.935 11:44:54 -- accel/accel.sh@20 -- # IFS=: 00:07:00.935 11:44:54 -- accel/accel.sh@20 -- # read -r var val 00:07:00.935 11:44:54 -- accel/accel.sh@21 -- # val=32 00:07:00.935 11:44:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.935 11:44:54 -- accel/accel.sh@20 -- # IFS=: 00:07:00.935 11:44:54 -- accel/accel.sh@20 -- # read -r 
var val 00:07:00.935 11:44:54 -- accel/accel.sh@21 -- # val=1 00:07:00.935 11:44:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.935 11:44:54 -- accel/accel.sh@20 -- # IFS=: 00:07:00.935 11:44:54 -- accel/accel.sh@20 -- # read -r var val 00:07:00.935 11:44:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:00.935 11:44:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.935 11:44:54 -- accel/accel.sh@20 -- # IFS=: 00:07:00.935 11:44:54 -- accel/accel.sh@20 -- # read -r var val 00:07:00.935 11:44:54 -- accel/accel.sh@21 -- # val=No 00:07:00.935 11:44:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.935 11:44:54 -- accel/accel.sh@20 -- # IFS=: 00:07:00.935 11:44:54 -- accel/accel.sh@20 -- # read -r var val 00:07:00.935 11:44:54 -- accel/accel.sh@21 -- # val= 00:07:00.935 11:44:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.935 11:44:54 -- accel/accel.sh@20 -- # IFS=: 00:07:00.935 11:44:54 -- accel/accel.sh@20 -- # read -r var val 00:07:00.935 11:44:54 -- accel/accel.sh@21 -- # val= 00:07:00.935 11:44:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.935 11:44:54 -- accel/accel.sh@20 -- # IFS=: 00:07:00.935 11:44:54 -- accel/accel.sh@20 -- # read -r var val 00:07:01.875 11:44:55 -- accel/accel.sh@21 -- # val= 00:07:01.875 11:44:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.875 11:44:55 -- accel/accel.sh@20 -- # IFS=: 00:07:01.875 11:44:55 -- accel/accel.sh@20 -- # read -r var val 00:07:01.875 11:44:55 -- accel/accel.sh@21 -- # val= 00:07:01.875 11:44:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.875 11:44:55 -- accel/accel.sh@20 -- # IFS=: 00:07:01.875 11:44:55 -- accel/accel.sh@20 -- # read -r var val 00:07:01.875 11:44:55 -- accel/accel.sh@21 -- # val= 00:07:01.875 11:44:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.875 11:44:55 -- accel/accel.sh@20 -- # IFS=: 00:07:01.875 11:44:55 -- accel/accel.sh@20 -- # read -r var val 00:07:01.875 11:44:55 -- accel/accel.sh@21 -- # val= 00:07:01.875 11:44:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.875 11:44:55 -- accel/accel.sh@20 -- # IFS=: 00:07:01.875 11:44:55 -- accel/accel.sh@20 -- # read -r var val 00:07:01.875 11:44:55 -- accel/accel.sh@21 -- # val= 00:07:01.875 11:44:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.875 11:44:55 -- accel/accel.sh@20 -- # IFS=: 00:07:01.875 11:44:55 -- accel/accel.sh@20 -- # read -r var val 00:07:01.875 11:44:55 -- accel/accel.sh@21 -- # val= 00:07:01.875 11:44:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.875 11:44:55 -- accel/accel.sh@20 -- # IFS=: 00:07:01.875 11:44:55 -- accel/accel.sh@20 -- # read -r var val 00:07:01.875 11:44:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:01.875 11:44:55 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:01.875 11:44:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.875 00:07:01.875 real 0m2.573s 00:07:01.875 user 0m2.373s 00:07:01.875 sys 0m0.206s 00:07:01.875 11:44:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.875 11:44:55 -- common/autotest_common.sh@10 -- # set +x 00:07:01.875 ************************************ 00:07:01.875 END TEST accel_dif_generate_copy 00:07:01.875 ************************************ 00:07:02.136 11:44:55 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:02.136 11:44:55 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:02.136 11:44:55 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:02.136 11:44:55 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:07:02.136 11:44:55 -- common/autotest_common.sh@10 -- # set +x 00:07:02.136 ************************************ 00:07:02.136 START TEST accel_comp 00:07:02.136 ************************************ 00:07:02.136 11:44:55 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:02.136 11:44:55 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.136 11:44:55 -- accel/accel.sh@17 -- # local accel_module 00:07:02.136 11:44:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:02.136 11:44:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:02.136 11:44:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.136 11:44:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.136 11:44:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.136 11:44:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.136 11:44:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.136 11:44:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.136 11:44:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.136 11:44:55 -- accel/accel.sh@42 -- # jq -r . 00:07:02.136 [2024-06-10 11:44:55.686720] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:02.136 [2024-06-10 11:44:55.686822] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1748930 ] 00:07:02.136 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.136 [2024-06-10 11:44:55.752716] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.136 [2024-06-10 11:44:55.818193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.520 11:44:56 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:03.520 00:07:03.520 SPDK Configuration: 00:07:03.520 Core mask: 0x1 00:07:03.520 00:07:03.520 Accel Perf Configuration: 00:07:03.520 Workload Type: compress 00:07:03.520 Transfer size: 4096 bytes 00:07:03.520 Vector count 1 00:07:03.520 Module: software 00:07:03.520 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:03.520 Queue depth: 32 00:07:03.520 Allocate depth: 32 00:07:03.520 # threads/core: 1 00:07:03.520 Run time: 1 seconds 00:07:03.520 Verify: No 00:07:03.520 00:07:03.520 Running for 1 seconds... 
00:07:03.520 00:07:03.520 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:03.520 ------------------------------------------------------------------------------------ 00:07:03.520 0,0 47552/s 198 MiB/s 0 0 00:07:03.520 ==================================================================================== 00:07:03.520 Total 47552/s 185 MiB/s 0 0' 00:07:03.520 11:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:03.520 11:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:03.520 11:44:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:03.520 11:44:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:03.520 11:44:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.520 11:44:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.520 11:44:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.520 11:44:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.520 11:44:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.520 11:44:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.520 11:44:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.520 11:44:56 -- accel/accel.sh@42 -- # jq -r . 00:07:03.520 [2024-06-10 11:44:56.973601] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:03.520 [2024-06-10 11:44:56.973674] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1749253 ] 00:07:03.520 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.520 [2024-06-10 11:44:57.034673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.520 [2024-06-10 11:44:57.097847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.520 11:44:57 -- accel/accel.sh@21 -- # val= 00:07:03.520 11:44:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # IFS=: 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # read -r var val 00:07:03.520 11:44:57 -- accel/accel.sh@21 -- # val= 00:07:03.520 11:44:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # IFS=: 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # read -r var val 00:07:03.520 11:44:57 -- accel/accel.sh@21 -- # val= 00:07:03.520 11:44:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # IFS=: 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # read -r var val 00:07:03.520 11:44:57 -- accel/accel.sh@21 -- # val=0x1 00:07:03.520 11:44:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # IFS=: 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # read -r var val 00:07:03.520 11:44:57 -- accel/accel.sh@21 -- # val= 00:07:03.520 11:44:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # IFS=: 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # read -r var val 00:07:03.520 11:44:57 -- accel/accel.sh@21 -- # val= 00:07:03.520 11:44:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # IFS=: 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # read -r var val 00:07:03.520 11:44:57 -- accel/accel.sh@21 -- # val=compress 00:07:03.520 11:44:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.520 
11:44:57 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # IFS=: 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # read -r var val 00:07:03.520 11:44:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.520 11:44:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # IFS=: 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # read -r var val 00:07:03.520 11:44:57 -- accel/accel.sh@21 -- # val= 00:07:03.520 11:44:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # IFS=: 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # read -r var val 00:07:03.520 11:44:57 -- accel/accel.sh@21 -- # val=software 00:07:03.520 11:44:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.520 11:44:57 -- accel/accel.sh@23 -- # accel_module=software 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # IFS=: 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # read -r var val 00:07:03.520 11:44:57 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:03.520 11:44:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # IFS=: 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # read -r var val 00:07:03.520 11:44:57 -- accel/accel.sh@21 -- # val=32 00:07:03.520 11:44:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # IFS=: 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # read -r var val 00:07:03.520 11:44:57 -- accel/accel.sh@21 -- # val=32 00:07:03.520 11:44:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # IFS=: 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # read -r var val 00:07:03.520 11:44:57 -- accel/accel.sh@21 -- # val=1 00:07:03.520 11:44:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # IFS=: 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # read -r var val 00:07:03.520 11:44:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:03.520 11:44:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # IFS=: 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # read -r var val 00:07:03.520 11:44:57 -- accel/accel.sh@21 -- # val=No 00:07:03.520 11:44:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # IFS=: 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # read -r var val 00:07:03.520 11:44:57 -- accel/accel.sh@21 -- # val= 00:07:03.520 11:44:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # IFS=: 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # read -r var val 00:07:03.520 11:44:57 -- accel/accel.sh@21 -- # val= 00:07:03.520 11:44:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # IFS=: 00:07:03.520 11:44:57 -- accel/accel.sh@20 -- # read -r var val 00:07:04.490 11:44:58 -- accel/accel.sh@21 -- # val= 00:07:04.490 11:44:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.490 11:44:58 -- accel/accel.sh@20 -- # IFS=: 00:07:04.490 11:44:58 -- accel/accel.sh@20 -- # read -r var val 00:07:04.490 11:44:58 -- accel/accel.sh@21 -- # val= 00:07:04.490 11:44:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.490 11:44:58 -- accel/accel.sh@20 -- # IFS=: 00:07:04.490 11:44:58 -- accel/accel.sh@20 -- # read -r var val 00:07:04.490 11:44:58 -- accel/accel.sh@21 -- # val= 00:07:04.490 11:44:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.490 11:44:58 -- accel/accel.sh@20 -- # 
IFS=: 00:07:04.490 11:44:58 -- accel/accel.sh@20 -- # read -r var val 00:07:04.490 11:44:58 -- accel/accel.sh@21 -- # val= 00:07:04.490 11:44:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.490 11:44:58 -- accel/accel.sh@20 -- # IFS=: 00:07:04.490 11:44:58 -- accel/accel.sh@20 -- # read -r var val 00:07:04.490 11:44:58 -- accel/accel.sh@21 -- # val= 00:07:04.490 11:44:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.490 11:44:58 -- accel/accel.sh@20 -- # IFS=: 00:07:04.490 11:44:58 -- accel/accel.sh@20 -- # read -r var val 00:07:04.490 11:44:58 -- accel/accel.sh@21 -- # val= 00:07:04.490 11:44:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.490 11:44:58 -- accel/accel.sh@20 -- # IFS=: 00:07:04.491 11:44:58 -- accel/accel.sh@20 -- # read -r var val 00:07:04.491 11:44:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:04.491 11:44:58 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:04.491 11:44:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.491 00:07:04.491 real 0m2.573s 00:07:04.491 user 0m2.374s 00:07:04.491 sys 0m0.205s 00:07:04.491 11:44:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.491 11:44:58 -- common/autotest_common.sh@10 -- # set +x 00:07:04.491 ************************************ 00:07:04.491 END TEST accel_comp 00:07:04.491 ************************************ 00:07:04.751 11:44:58 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:04.751 11:44:58 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:04.751 11:44:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:04.751 11:44:58 -- common/autotest_common.sh@10 -- # set +x 00:07:04.751 ************************************ 00:07:04.751 START TEST accel_decomp 00:07:04.751 ************************************ 00:07:04.751 11:44:58 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:04.751 11:44:58 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.751 11:44:58 -- accel/accel.sh@17 -- # local accel_module 00:07:04.751 11:44:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:04.751 11:44:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:04.751 11:44:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.751 11:44:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.751 11:44:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.751 11:44:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.751 11:44:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.751 11:44:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.751 11:44:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.751 11:44:58 -- accel/accel.sh@42 -- # jq -r . 00:07:04.751 [2024-06-10 11:44:58.298375] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
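The START TEST/END TEST banners and the real/user/sys timing lines wrapped around each case come from the run_test helper in autotest_common.sh, which is referenced throughout this log. As a rough sketch only (the real helper also manages xtrace state and exit-code bookkeeping; the function name here is a hypothetical stand-in), its shape is roughly:

  # simplified stand-in for run_test: print banners around a timed command
  run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      time "$@"                      # produces the real/user/sys lines seen above
      echo "END TEST $name"
      echo "************************************"
  }
  # e.g. run_test_sketch accel_decomp accel_test -t 1 -w decompress -l test/accel/bib -y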
00:07:04.751 [2024-06-10 11:44:58.298456] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1749602 ] 00:07:04.751 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.751 [2024-06-10 11:44:58.360393] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.751 [2024-06-10 11:44:58.422843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.134 11:44:59 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:06.134 00:07:06.134 SPDK Configuration: 00:07:06.134 Core mask: 0x1 00:07:06.134 00:07:06.134 Accel Perf Configuration: 00:07:06.134 Workload Type: decompress 00:07:06.134 Transfer size: 4096 bytes 00:07:06.134 Vector count 1 00:07:06.134 Module: software 00:07:06.134 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:06.134 Queue depth: 32 00:07:06.134 Allocate depth: 32 00:07:06.134 # threads/core: 1 00:07:06.134 Run time: 1 seconds 00:07:06.134 Verify: Yes 00:07:06.134 00:07:06.134 Running for 1 seconds... 00:07:06.134 00:07:06.134 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:06.134 ------------------------------------------------------------------------------------ 00:07:06.134 0,0 63168/s 246 MiB/s 0 0 00:07:06.134 ==================================================================================== 00:07:06.134 Total 63168/s 246 MiB/s 0 0' 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # IFS=: 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # read -r var val 00:07:06.134 11:44:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:06.134 11:44:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:06.134 11:44:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.134 11:44:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.134 11:44:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.134 11:44:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.134 11:44:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.134 11:44:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.134 11:44:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.134 11:44:59 -- accel/accel.sh@42 -- # jq -r . 00:07:06.134 [2024-06-10 11:44:59.576694] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
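The Bandwidth column in these tables is simply transfers per second multiplied by the transfer size; a quick shell check against the decompress table above (numbers taken from that table):

  echo $(( 63168 * 4096 / 1024 / 1024 ))   # prints 246, i.e. ~246 MiB/s, matching the Total row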
00:07:06.134 [2024-06-10 11:44:59.576783] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1749895 ] 00:07:06.134 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.134 [2024-06-10 11:44:59.639434] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.134 [2024-06-10 11:44:59.701838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.134 11:44:59 -- accel/accel.sh@21 -- # val= 00:07:06.134 11:44:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # IFS=: 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # read -r var val 00:07:06.134 11:44:59 -- accel/accel.sh@21 -- # val= 00:07:06.134 11:44:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # IFS=: 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # read -r var val 00:07:06.134 11:44:59 -- accel/accel.sh@21 -- # val= 00:07:06.134 11:44:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # IFS=: 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # read -r var val 00:07:06.134 11:44:59 -- accel/accel.sh@21 -- # val=0x1 00:07:06.134 11:44:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # IFS=: 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # read -r var val 00:07:06.134 11:44:59 -- accel/accel.sh@21 -- # val= 00:07:06.134 11:44:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # IFS=: 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # read -r var val 00:07:06.134 11:44:59 -- accel/accel.sh@21 -- # val= 00:07:06.134 11:44:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # IFS=: 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # read -r var val 00:07:06.134 11:44:59 -- accel/accel.sh@21 -- # val=decompress 00:07:06.134 11:44:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.134 11:44:59 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # IFS=: 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # read -r var val 00:07:06.134 11:44:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.134 11:44:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # IFS=: 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # read -r var val 00:07:06.134 11:44:59 -- accel/accel.sh@21 -- # val= 00:07:06.134 11:44:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # IFS=: 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # read -r var val 00:07:06.134 11:44:59 -- accel/accel.sh@21 -- # val=software 00:07:06.134 11:44:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.134 11:44:59 -- accel/accel.sh@23 -- # accel_module=software 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # IFS=: 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # read -r var val 00:07:06.134 11:44:59 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:06.134 11:44:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # IFS=: 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # read -r var val 00:07:06.134 11:44:59 -- accel/accel.sh@21 -- # val=32 00:07:06.134 11:44:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # IFS=: 00:07:06.134 11:44:59 
-- accel/accel.sh@20 -- # read -r var val 00:07:06.134 11:44:59 -- accel/accel.sh@21 -- # val=32 00:07:06.134 11:44:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # IFS=: 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # read -r var val 00:07:06.134 11:44:59 -- accel/accel.sh@21 -- # val=1 00:07:06.134 11:44:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # IFS=: 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # read -r var val 00:07:06.134 11:44:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:06.134 11:44:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # IFS=: 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # read -r var val 00:07:06.134 11:44:59 -- accel/accel.sh@21 -- # val=Yes 00:07:06.134 11:44:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # IFS=: 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # read -r var val 00:07:06.134 11:44:59 -- accel/accel.sh@21 -- # val= 00:07:06.134 11:44:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # IFS=: 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # read -r var val 00:07:06.134 11:44:59 -- accel/accel.sh@21 -- # val= 00:07:06.134 11:44:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # IFS=: 00:07:06.134 11:44:59 -- accel/accel.sh@20 -- # read -r var val 00:07:07.074 11:45:00 -- accel/accel.sh@21 -- # val= 00:07:07.074 11:45:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.074 11:45:00 -- accel/accel.sh@20 -- # IFS=: 00:07:07.074 11:45:00 -- accel/accel.sh@20 -- # read -r var val 00:07:07.074 11:45:00 -- accel/accel.sh@21 -- # val= 00:07:07.074 11:45:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.074 11:45:00 -- accel/accel.sh@20 -- # IFS=: 00:07:07.074 11:45:00 -- accel/accel.sh@20 -- # read -r var val 00:07:07.074 11:45:00 -- accel/accel.sh@21 -- # val= 00:07:07.074 11:45:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.074 11:45:00 -- accel/accel.sh@20 -- # IFS=: 00:07:07.074 11:45:00 -- accel/accel.sh@20 -- # read -r var val 00:07:07.074 11:45:00 -- accel/accel.sh@21 -- # val= 00:07:07.074 11:45:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.074 11:45:00 -- accel/accel.sh@20 -- # IFS=: 00:07:07.074 11:45:00 -- accel/accel.sh@20 -- # read -r var val 00:07:07.074 11:45:00 -- accel/accel.sh@21 -- # val= 00:07:07.074 11:45:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.074 11:45:00 -- accel/accel.sh@20 -- # IFS=: 00:07:07.074 11:45:00 -- accel/accel.sh@20 -- # read -r var val 00:07:07.074 11:45:00 -- accel/accel.sh@21 -- # val= 00:07:07.074 11:45:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.074 11:45:00 -- accel/accel.sh@20 -- # IFS=: 00:07:07.074 11:45:00 -- accel/accel.sh@20 -- # read -r var val 00:07:07.074 11:45:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:07.074 11:45:00 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:07.074 11:45:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.074 00:07:07.074 real 0m2.565s 00:07:07.074 user 0m2.366s 00:07:07.074 sys 0m0.204s 00:07:07.074 11:45:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.074 11:45:00 -- common/autotest_common.sh@10 -- # set +x 00:07:07.074 ************************************ 00:07:07.074 END TEST accel_decomp 00:07:07.074 ************************************ 00:07:07.335 11:45:00 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:07.335 11:45:00 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:07.335 11:45:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:07.335 11:45:00 -- common/autotest_common.sh@10 -- # set +x 00:07:07.335 ************************************ 00:07:07.335 START TEST accel_decmop_full 00:07:07.335 ************************************ 00:07:07.335 11:45:00 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:07.335 11:45:00 -- accel/accel.sh@16 -- # local accel_opc 00:07:07.335 11:45:00 -- accel/accel.sh@17 -- # local accel_module 00:07:07.335 11:45:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:07.335 11:45:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:07.335 11:45:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.335 11:45:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.335 11:45:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.335 11:45:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.335 11:45:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.335 11:45:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.335 11:45:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.335 11:45:00 -- accel/accel.sh@42 -- # jq -r . 00:07:07.335 [2024-06-10 11:45:00.905760] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:07.335 [2024-06-10 11:45:00.905864] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1750122 ] 00:07:07.335 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.335 [2024-06-10 11:45:00.967568] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.335 [2024-06-10 11:45:01.029945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.720 11:45:02 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:08.720 00:07:08.720 SPDK Configuration: 00:07:08.720 Core mask: 0x1 00:07:08.720 00:07:08.720 Accel Perf Configuration: 00:07:08.720 Workload Type: decompress 00:07:08.720 Transfer size: 111250 bytes 00:07:08.720 Vector count 1 00:07:08.720 Module: software 00:07:08.720 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:08.720 Queue depth: 32 00:07:08.720 Allocate depth: 32 00:07:08.720 # threads/core: 1 00:07:08.720 Run time: 1 seconds 00:07:08.720 Verify: Yes 00:07:08.720 00:07:08.720 Running for 1 seconds... 
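Unlike the earlier runs, this variant passes -y -o 0 and the reported transfer size becomes 111250 bytes, so each operation decompresses one full-sized buffer instead of a 4096-byte chunk. Reading -o 0 as "use the whole test file" is an inference from this log, not a documented guarantee; if that reading is right, the bib input file should be exactly that size:

  # assumption: the 111250-byte transfer size equals the size of the test input file
  stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib   # expected: 111250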
00:07:08.720 00:07:08.720 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:08.720 ------------------------------------------------------------------------------------ 00:07:08.720 0,0 4064/s 431 MiB/s 0 0 00:07:08.720 ==================================================================================== 00:07:08.720 Total 4064/s 431 MiB/s 0 0' 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # IFS=: 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # read -r var val 00:07:08.720 11:45:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:08.720 11:45:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:08.720 11:45:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.720 11:45:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.720 11:45:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.720 11:45:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.720 11:45:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.720 11:45:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.720 11:45:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.720 11:45:02 -- accel/accel.sh@42 -- # jq -r . 00:07:08.720 [2024-06-10 11:45:02.192181] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:08.720 [2024-06-10 11:45:02.192280] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1750420 ] 00:07:08.720 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.720 [2024-06-10 11:45:02.254082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.720 [2024-06-10 11:45:02.316326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.720 11:45:02 -- accel/accel.sh@21 -- # val= 00:07:08.720 11:45:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # IFS=: 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # read -r var val 00:07:08.720 11:45:02 -- accel/accel.sh@21 -- # val= 00:07:08.720 11:45:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # IFS=: 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # read -r var val 00:07:08.720 11:45:02 -- accel/accel.sh@21 -- # val= 00:07:08.720 11:45:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # IFS=: 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # read -r var val 00:07:08.720 11:45:02 -- accel/accel.sh@21 -- # val=0x1 00:07:08.720 11:45:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # IFS=: 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # read -r var val 00:07:08.720 11:45:02 -- accel/accel.sh@21 -- # val= 00:07:08.720 11:45:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # IFS=: 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # read -r var val 00:07:08.720 11:45:02 -- accel/accel.sh@21 -- # val= 00:07:08.720 11:45:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # IFS=: 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # read -r var val 00:07:08.720 11:45:02 -- accel/accel.sh@21 -- # val=decompress 00:07:08.720 11:45:02 -- accel/accel.sh@22 -- # case "$var" 
in 00:07:08.720 11:45:02 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # IFS=: 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # read -r var val 00:07:08.720 11:45:02 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:08.720 11:45:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # IFS=: 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # read -r var val 00:07:08.720 11:45:02 -- accel/accel.sh@21 -- # val= 00:07:08.720 11:45:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # IFS=: 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # read -r var val 00:07:08.720 11:45:02 -- accel/accel.sh@21 -- # val=software 00:07:08.720 11:45:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.720 11:45:02 -- accel/accel.sh@23 -- # accel_module=software 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # IFS=: 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # read -r var val 00:07:08.720 11:45:02 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:08.720 11:45:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # IFS=: 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # read -r var val 00:07:08.720 11:45:02 -- accel/accel.sh@21 -- # val=32 00:07:08.720 11:45:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # IFS=: 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # read -r var val 00:07:08.720 11:45:02 -- accel/accel.sh@21 -- # val=32 00:07:08.720 11:45:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # IFS=: 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # read -r var val 00:07:08.720 11:45:02 -- accel/accel.sh@21 -- # val=1 00:07:08.720 11:45:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # IFS=: 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # read -r var val 00:07:08.720 11:45:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:08.720 11:45:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # IFS=: 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # read -r var val 00:07:08.720 11:45:02 -- accel/accel.sh@21 -- # val=Yes 00:07:08.720 11:45:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # IFS=: 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # read -r var val 00:07:08.720 11:45:02 -- accel/accel.sh@21 -- # val= 00:07:08.720 11:45:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # IFS=: 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # read -r var val 00:07:08.720 11:45:02 -- accel/accel.sh@21 -- # val= 00:07:08.720 11:45:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # IFS=: 00:07:08.720 11:45:02 -- accel/accel.sh@20 -- # read -r var val 00:07:10.103 11:45:03 -- accel/accel.sh@21 -- # val= 00:07:10.103 11:45:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.103 11:45:03 -- accel/accel.sh@20 -- # IFS=: 00:07:10.103 11:45:03 -- accel/accel.sh@20 -- # read -r var val 00:07:10.103 11:45:03 -- accel/accel.sh@21 -- # val= 00:07:10.103 11:45:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.103 11:45:03 -- accel/accel.sh@20 -- # IFS=: 00:07:10.103 11:45:03 -- accel/accel.sh@20 -- # read -r var val 00:07:10.103 11:45:03 -- accel/accel.sh@21 -- # val= 00:07:10.103 11:45:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.103 11:45:03 -- 
accel/accel.sh@20 -- # IFS=: 00:07:10.103 11:45:03 -- accel/accel.sh@20 -- # read -r var val 00:07:10.103 11:45:03 -- accel/accel.sh@21 -- # val= 00:07:10.103 11:45:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.103 11:45:03 -- accel/accel.sh@20 -- # IFS=: 00:07:10.103 11:45:03 -- accel/accel.sh@20 -- # read -r var val 00:07:10.103 11:45:03 -- accel/accel.sh@21 -- # val= 00:07:10.103 11:45:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.103 11:45:03 -- accel/accel.sh@20 -- # IFS=: 00:07:10.103 11:45:03 -- accel/accel.sh@20 -- # read -r var val 00:07:10.103 11:45:03 -- accel/accel.sh@21 -- # val= 00:07:10.103 11:45:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.103 11:45:03 -- accel/accel.sh@20 -- # IFS=: 00:07:10.103 11:45:03 -- accel/accel.sh@20 -- # read -r var val 00:07:10.103 11:45:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:10.103 11:45:03 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:10.103 11:45:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.103 00:07:10.103 real 0m2.581s 00:07:10.103 user 0m2.386s 00:07:10.103 sys 0m0.201s 00:07:10.103 11:45:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.103 11:45:03 -- common/autotest_common.sh@10 -- # set +x 00:07:10.103 ************************************ 00:07:10.103 END TEST accel_decmop_full 00:07:10.103 ************************************ 00:07:10.103 11:45:03 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:10.103 11:45:03 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:10.103 11:45:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:10.103 11:45:03 -- common/autotest_common.sh@10 -- # set +x 00:07:10.103 ************************************ 00:07:10.103 START TEST accel_decomp_mcore 00:07:10.103 ************************************ 00:07:10.103 11:45:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:10.103 11:45:03 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.103 11:45:03 -- accel/accel.sh@17 -- # local accel_module 00:07:10.103 11:45:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:10.103 11:45:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:10.103 11:45:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.103 11:45:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.103 11:45:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.103 11:45:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.103 11:45:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.103 11:45:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.103 11:45:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.103 11:45:03 -- accel/accel.sh@42 -- # jq -r . 00:07:10.103 [2024-06-10 11:45:03.528281] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
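The _mcore variant adds -m 0xf, a hexadecimal core mask selecting CPUs 0 through 3, which is why four "Reactor started on core N" notices appear just below; each set bit in the mask gets its own polling reactor. A small shell check to list the cores a mask selects:

  mask=0xf                                    # core mask taken from the run above
  for cpu in {0..31}; do
      (( (mask >> cpu) & 1 )) && echo "core $cpu"
  done                                        # prints core 0 .. core 3 for 0xf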
00:07:10.103 [2024-06-10 11:45:03.528370] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1750777 ] 00:07:10.103 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.103 [2024-06-10 11:45:03.591786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:10.103 [2024-06-10 11:45:03.658433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.103 [2024-06-10 11:45:03.658548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.103 [2024-06-10 11:45:03.658702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.103 [2024-06-10 11:45:03.658703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.045 11:45:04 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:11.045 00:07:11.045 SPDK Configuration: 00:07:11.045 Core mask: 0xf 00:07:11.045 00:07:11.045 Accel Perf Configuration: 00:07:11.045 Workload Type: decompress 00:07:11.045 Transfer size: 4096 bytes 00:07:11.045 Vector count 1 00:07:11.045 Module: software 00:07:11.045 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:11.045 Queue depth: 32 00:07:11.045 Allocate depth: 32 00:07:11.045 # threads/core: 1 00:07:11.045 Run time: 1 seconds 00:07:11.045 Verify: Yes 00:07:11.045 00:07:11.045 Running for 1 seconds... 00:07:11.045 00:07:11.045 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:11.045 ------------------------------------------------------------------------------------ 00:07:11.045 0,0 58592/s 228 MiB/s 0 0 00:07:11.045 3,0 58848/s 229 MiB/s 0 0 00:07:11.045 2,0 86496/s 337 MiB/s 0 0 00:07:11.045 1,0 58848/s 229 MiB/s 0 0 00:07:11.045 ==================================================================================== 00:07:11.045 Total 262784/s 1026 MiB/s 0 0' 00:07:11.045 11:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:11.045 11:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:11.046 11:45:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:11.046 11:45:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:11.046 11:45:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.046 11:45:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.046 11:45:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.046 11:45:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.046 11:45:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.046 11:45:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.046 11:45:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.046 11:45:04 -- accel/accel.sh@42 -- # jq -r . 00:07:11.046 [2024-06-10 11:45:04.805665] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
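In the multicore table above, the Total row is simply the per-core rows summed, and the aggregate bandwidth again follows from the transfer count times the 4096-byte transfer size:

  echo $(( 58592 + 58848 + 86496 + 58848 ))   # 262784 transfers/s, as in the Total row
  echo $(( 262784 * 4096 / 1024 / 1024 ))     # 1026, i.e. ~1026 MiB/s aggregate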
00:07:11.046 [2024-06-10 11:45:04.805720] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1751114 ] 00:07:11.306 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.306 [2024-06-10 11:45:04.864521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:11.306 [2024-06-10 11:45:04.928950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.306 [2024-06-10 11:45:04.929065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.306 [2024-06-10 11:45:04.929220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.306 [2024-06-10 11:45:04.929221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.306 11:45:04 -- accel/accel.sh@21 -- # val= 00:07:11.306 11:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:11.306 11:45:04 -- accel/accel.sh@21 -- # val= 00:07:11.306 11:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:11.306 11:45:04 -- accel/accel.sh@21 -- # val= 00:07:11.306 11:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:11.306 11:45:04 -- accel/accel.sh@21 -- # val=0xf 00:07:11.306 11:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:11.306 11:45:04 -- accel/accel.sh@21 -- # val= 00:07:11.306 11:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:11.306 11:45:04 -- accel/accel.sh@21 -- # val= 00:07:11.306 11:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:11.306 11:45:04 -- accel/accel.sh@21 -- # val=decompress 00:07:11.306 11:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.306 11:45:04 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:11.306 11:45:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:11.306 11:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:11.306 11:45:04 -- accel/accel.sh@21 -- # val= 00:07:11.306 11:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:11.306 11:45:04 -- accel/accel.sh@21 -- # val=software 00:07:11.306 11:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.306 11:45:04 -- accel/accel.sh@23 -- # accel_module=software 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:11.306 11:45:04 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:11.306 11:45:04 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:11.306 11:45:04 -- accel/accel.sh@21 -- # val=32 00:07:11.306 11:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:11.306 11:45:04 -- accel/accel.sh@21 -- # val=32 00:07:11.306 11:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:11.306 11:45:04 -- accel/accel.sh@21 -- # val=1 00:07:11.306 11:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:11.306 11:45:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:11.306 11:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:11.306 11:45:04 -- accel/accel.sh@21 -- # val=Yes 00:07:11.306 11:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:11.306 11:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:11.306 11:45:04 -- accel/accel.sh@21 -- # val= 00:07:11.306 11:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.307 11:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:11.307 11:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:11.307 11:45:04 -- accel/accel.sh@21 -- # val= 00:07:11.307 11:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.307 11:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:11.307 11:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:12.691 11:45:06 -- accel/accel.sh@21 -- # val= 00:07:12.691 11:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.691 11:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:12.691 11:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:12.691 11:45:06 -- accel/accel.sh@21 -- # val= 00:07:12.691 11:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.691 11:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:12.691 11:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:12.691 11:45:06 -- accel/accel.sh@21 -- # val= 00:07:12.691 11:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.691 11:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:12.691 11:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:12.691 11:45:06 -- accel/accel.sh@21 -- # val= 00:07:12.691 11:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.691 11:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:12.691 11:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:12.691 11:45:06 -- accel/accel.sh@21 -- # val= 00:07:12.691 11:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.691 11:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:12.691 11:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:12.691 11:45:06 -- accel/accel.sh@21 -- # val= 00:07:12.691 11:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.691 11:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:12.691 11:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:12.691 11:45:06 -- accel/accel.sh@21 -- # val= 00:07:12.691 11:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.691 11:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:12.691 11:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:12.691 11:45:06 -- accel/accel.sh@21 -- # val= 00:07:12.691 11:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.691 
11:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:12.691 11:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:12.691 11:45:06 -- accel/accel.sh@21 -- # val= 00:07:12.691 11:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.691 11:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:12.691 11:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:12.691 11:45:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:12.691 11:45:06 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:12.691 11:45:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.691 00:07:12.691 real 0m2.568s 00:07:12.691 user 0m8.842s 00:07:12.691 sys 0m0.198s 00:07:12.691 11:45:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.691 11:45:06 -- common/autotest_common.sh@10 -- # set +x 00:07:12.691 ************************************ 00:07:12.691 END TEST accel_decomp_mcore 00:07:12.691 ************************************ 00:07:12.691 11:45:06 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:12.691 11:45:06 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:12.691 11:45:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:12.691 11:45:06 -- common/autotest_common.sh@10 -- # set +x 00:07:12.691 ************************************ 00:07:12.691 START TEST accel_decomp_full_mcore 00:07:12.691 ************************************ 00:07:12.691 11:45:06 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:12.691 11:45:06 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.691 11:45:06 -- accel/accel.sh@17 -- # local accel_module 00:07:12.691 11:45:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:12.691 11:45:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:12.691 11:45:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.691 11:45:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.691 11:45:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.691 11:45:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.691 11:45:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.691 11:45:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.691 11:45:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.691 11:45:06 -- accel/accel.sh@42 -- # jq -r . 00:07:12.692 [2024-06-10 11:45:06.138732] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
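The timing summary just above (real around 2.6 s but user around 8.8 s) is expected for the 0xf run: SPDK reactors busy-poll, so each of the four cores burns CPU for the whole wall-clock window and user time lands near wall time multiplied by the active core count, a bit under 4x here because setup and teardown run on a single core. A quick ratio check with the numbers from this run:

  awk 'BEGIN { printf "%.1f\n", 8.842 / 2.568 }'   # ~3.4, close to the 4 polling cores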
00:07:12.692 [2024-06-10 11:45:06.138804] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1751327 ] 00:07:12.692 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.692 [2024-06-10 11:45:06.201276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:12.692 [2024-06-10 11:45:06.270188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.692 [2024-06-10 11:45:06.270322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.692 [2024-06-10 11:45:06.270669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.692 [2024-06-10 11:45:06.270670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.076 11:45:07 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:14.076 00:07:14.076 SPDK Configuration: 00:07:14.076 Core mask: 0xf 00:07:14.076 00:07:14.076 Accel Perf Configuration: 00:07:14.076 Workload Type: decompress 00:07:14.076 Transfer size: 111250 bytes 00:07:14.076 Vector count 1 00:07:14.076 Module: software 00:07:14.076 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:14.076 Queue depth: 32 00:07:14.076 Allocate depth: 32 00:07:14.076 # threads/core: 1 00:07:14.076 Run time: 1 seconds 00:07:14.076 Verify: Yes 00:07:14.076 00:07:14.076 Running for 1 seconds... 00:07:14.076 00:07:14.076 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:14.076 ------------------------------------------------------------------------------------ 00:07:14.076 0,0 4096/s 434 MiB/s 0 0 00:07:14.076 3,0 4096/s 434 MiB/s 0 0 00:07:14.076 2,0 5952/s 631 MiB/s 0 0 00:07:14.076 1,0 4096/s 434 MiB/s 0 0 00:07:14.076 ==================================================================================== 00:07:14.076 Total 18240/s 1935 MiB/s 0 0' 00:07:14.076 11:45:07 -- accel/accel.sh@20 -- # IFS=: 00:07:14.076 11:45:07 -- accel/accel.sh@20 -- # read -r var val 00:07:14.076 11:45:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:14.076 11:45:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:14.076 11:45:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.076 11:45:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.076 11:45:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.076 11:45:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.076 11:45:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.076 11:45:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.076 11:45:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.076 11:45:07 -- accel/accel.sh@42 -- # jq -r . 00:07:14.076 [2024-06-10 11:45:07.445252] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:14.076 [2024-06-10 11:45:07.445356] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1751500 ] 00:07:14.077 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.077 [2024-06-10 11:45:07.508776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:14.077 [2024-06-10 11:45:07.573709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.077 [2024-06-10 11:45:07.573824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.077 [2024-06-10 11:45:07.573979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.077 [2024-06-10 11:45:07.573979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.077 11:45:07 -- accel/accel.sh@21 -- # val= 00:07:14.077 11:45:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # IFS=: 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # read -r var val 00:07:14.077 11:45:07 -- accel/accel.sh@21 -- # val= 00:07:14.077 11:45:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # IFS=: 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # read -r var val 00:07:14.077 11:45:07 -- accel/accel.sh@21 -- # val= 00:07:14.077 11:45:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # IFS=: 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # read -r var val 00:07:14.077 11:45:07 -- accel/accel.sh@21 -- # val=0xf 00:07:14.077 11:45:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # IFS=: 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # read -r var val 00:07:14.077 11:45:07 -- accel/accel.sh@21 -- # val= 00:07:14.077 11:45:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # IFS=: 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # read -r var val 00:07:14.077 11:45:07 -- accel/accel.sh@21 -- # val= 00:07:14.077 11:45:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # IFS=: 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # read -r var val 00:07:14.077 11:45:07 -- accel/accel.sh@21 -- # val=decompress 00:07:14.077 11:45:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.077 11:45:07 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # IFS=: 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # read -r var val 00:07:14.077 11:45:07 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:14.077 11:45:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # IFS=: 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # read -r var val 00:07:14.077 11:45:07 -- accel/accel.sh@21 -- # val= 00:07:14.077 11:45:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # IFS=: 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # read -r var val 00:07:14.077 11:45:07 -- accel/accel.sh@21 -- # val=software 00:07:14.077 11:45:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.077 11:45:07 -- accel/accel.sh@23 -- # accel_module=software 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # IFS=: 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # read -r var val 00:07:14.077 11:45:07 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:14.077 11:45:07 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # IFS=: 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # read -r var val 00:07:14.077 11:45:07 -- accel/accel.sh@21 -- # val=32 00:07:14.077 11:45:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # IFS=: 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # read -r var val 00:07:14.077 11:45:07 -- accel/accel.sh@21 -- # val=32 00:07:14.077 11:45:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # IFS=: 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # read -r var val 00:07:14.077 11:45:07 -- accel/accel.sh@21 -- # val=1 00:07:14.077 11:45:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # IFS=: 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # read -r var val 00:07:14.077 11:45:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:14.077 11:45:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # IFS=: 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # read -r var val 00:07:14.077 11:45:07 -- accel/accel.sh@21 -- # val=Yes 00:07:14.077 11:45:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # IFS=: 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # read -r var val 00:07:14.077 11:45:07 -- accel/accel.sh@21 -- # val= 00:07:14.077 11:45:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # IFS=: 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # read -r var val 00:07:14.077 11:45:07 -- accel/accel.sh@21 -- # val= 00:07:14.077 11:45:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # IFS=: 00:07:14.077 11:45:07 -- accel/accel.sh@20 -- # read -r var val 00:07:15.019 11:45:08 -- accel/accel.sh@21 -- # val= 00:07:15.019 11:45:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.019 11:45:08 -- accel/accel.sh@20 -- # IFS=: 00:07:15.019 11:45:08 -- accel/accel.sh@20 -- # read -r var val 00:07:15.019 11:45:08 -- accel/accel.sh@21 -- # val= 00:07:15.019 11:45:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.019 11:45:08 -- accel/accel.sh@20 -- # IFS=: 00:07:15.019 11:45:08 -- accel/accel.sh@20 -- # read -r var val 00:07:15.019 11:45:08 -- accel/accel.sh@21 -- # val= 00:07:15.019 11:45:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.019 11:45:08 -- accel/accel.sh@20 -- # IFS=: 00:07:15.019 11:45:08 -- accel/accel.sh@20 -- # read -r var val 00:07:15.019 11:45:08 -- accel/accel.sh@21 -- # val= 00:07:15.019 11:45:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.019 11:45:08 -- accel/accel.sh@20 -- # IFS=: 00:07:15.019 11:45:08 -- accel/accel.sh@20 -- # read -r var val 00:07:15.019 11:45:08 -- accel/accel.sh@21 -- # val= 00:07:15.019 11:45:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.019 11:45:08 -- accel/accel.sh@20 -- # IFS=: 00:07:15.019 11:45:08 -- accel/accel.sh@20 -- # read -r var val 00:07:15.019 11:45:08 -- accel/accel.sh@21 -- # val= 00:07:15.019 11:45:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.019 11:45:08 -- accel/accel.sh@20 -- # IFS=: 00:07:15.019 11:45:08 -- accel/accel.sh@20 -- # read -r var val 00:07:15.019 11:45:08 -- accel/accel.sh@21 -- # val= 00:07:15.019 11:45:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.019 11:45:08 -- accel/accel.sh@20 -- # IFS=: 00:07:15.019 11:45:08 -- accel/accel.sh@20 -- # read -r var val 00:07:15.019 11:45:08 -- accel/accel.sh@21 -- # val= 00:07:15.019 11:45:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.019 
11:45:08 -- accel/accel.sh@20 -- # IFS=: 00:07:15.019 11:45:08 -- accel/accel.sh@20 -- # read -r var val 00:07:15.019 11:45:08 -- accel/accel.sh@21 -- # val= 00:07:15.019 11:45:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.019 11:45:08 -- accel/accel.sh@20 -- # IFS=: 00:07:15.019 11:45:08 -- accel/accel.sh@20 -- # read -r var val 00:07:15.019 11:45:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:15.019 11:45:08 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:15.019 11:45:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.019 00:07:15.019 real 0m2.615s 00:07:15.019 user 0m8.961s 00:07:15.019 sys 0m0.203s 00:07:15.019 11:45:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.019 11:45:08 -- common/autotest_common.sh@10 -- # set +x 00:07:15.019 ************************************ 00:07:15.019 END TEST accel_decomp_full_mcore 00:07:15.019 ************************************ 00:07:15.019 11:45:08 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:15.019 11:45:08 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:15.019 11:45:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:15.019 11:45:08 -- common/autotest_common.sh@10 -- # set +x 00:07:15.019 ************************************ 00:07:15.019 START TEST accel_decomp_mthread 00:07:15.019 ************************************ 00:07:15.019 11:45:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:15.019 11:45:08 -- accel/accel.sh@16 -- # local accel_opc 00:07:15.019 11:45:08 -- accel/accel.sh@17 -- # local accel_module 00:07:15.019 11:45:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:15.019 11:45:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:15.019 11:45:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.019 11:45:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.019 11:45:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.019 11:45:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.019 11:45:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.019 11:45:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.019 11:45:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.019 11:45:08 -- accel/accel.sh@42 -- # jq -r . 00:07:15.280 [2024-06-10 11:45:08.798017] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:15.280 [2024-06-10 11:45:08.798118] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1751874 ] 00:07:15.280 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.280 [2024-06-10 11:45:08.862096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.280 [2024-06-10 11:45:08.928304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.665 11:45:10 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:16.665 00:07:16.665 SPDK Configuration: 00:07:16.665 Core mask: 0x1 00:07:16.665 00:07:16.665 Accel Perf Configuration: 00:07:16.665 Workload Type: decompress 00:07:16.665 Transfer size: 4096 bytes 00:07:16.665 Vector count 1 00:07:16.665 Module: software 00:07:16.665 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.665 Queue depth: 32 00:07:16.665 Allocate depth: 32 00:07:16.665 # threads/core: 2 00:07:16.665 Run time: 1 seconds 00:07:16.665 Verify: Yes 00:07:16.665 00:07:16.665 Running for 1 seconds... 00:07:16.665 00:07:16.665 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:16.665 ------------------------------------------------------------------------------------ 00:07:16.665 0,1 31808/s 124 MiB/s 0 0 00:07:16.665 0,0 31712/s 123 MiB/s 0 0 00:07:16.665 ==================================================================================== 00:07:16.665 Total 63520/s 248 MiB/s 0 0' 00:07:16.665 11:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:16.665 11:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:16.665 11:45:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:16.665 11:45:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:16.665 11:45:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.665 11:45:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.665 11:45:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.665 11:45:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.665 11:45:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.665 11:45:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.665 11:45:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.665 11:45:10 -- accel/accel.sh@42 -- # jq -r . 00:07:16.665 [2024-06-10 11:45:10.089483] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
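This run passes -T 2, so a single core (mask 0x1) carries two worker threads and the table above reports them as 0,0 and 0,1 in the Core,Thread column; their transfer counts again add up to the Total row:

  echo $(( 31808 + 31712 ))                   # 63520 transfers/s across the two threads
  echo $(( 63520 * 4096 / 1024 / 1024 ))      # 248, i.e. ~248 MiB/s, matching Total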
00:07:16.665 [2024-06-10 11:45:10.089559] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752506 ] 00:07:16.665 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.665 [2024-06-10 11:45:10.150495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.666 [2024-06-10 11:45:10.213019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.666 11:45:10 -- accel/accel.sh@21 -- # val= 00:07:16.666 11:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:16.666 11:45:10 -- accel/accel.sh@21 -- # val= 00:07:16.666 11:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:16.666 11:45:10 -- accel/accel.sh@21 -- # val= 00:07:16.666 11:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:16.666 11:45:10 -- accel/accel.sh@21 -- # val=0x1 00:07:16.666 11:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:16.666 11:45:10 -- accel/accel.sh@21 -- # val= 00:07:16.666 11:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:16.666 11:45:10 -- accel/accel.sh@21 -- # val= 00:07:16.666 11:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:16.666 11:45:10 -- accel/accel.sh@21 -- # val=decompress 00:07:16.666 11:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.666 11:45:10 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:16.666 11:45:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:16.666 11:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:16.666 11:45:10 -- accel/accel.sh@21 -- # val= 00:07:16.666 11:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:16.666 11:45:10 -- accel/accel.sh@21 -- # val=software 00:07:16.666 11:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.666 11:45:10 -- accel/accel.sh@23 -- # accel_module=software 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:16.666 11:45:10 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.666 11:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:16.666 11:45:10 -- accel/accel.sh@21 -- # val=32 00:07:16.666 11:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:16.666 11:45:10 
-- accel/accel.sh@20 -- # read -r var val 00:07:16.666 11:45:10 -- accel/accel.sh@21 -- # val=32 00:07:16.666 11:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:16.666 11:45:10 -- accel/accel.sh@21 -- # val=2 00:07:16.666 11:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:16.666 11:45:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:16.666 11:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:16.666 11:45:10 -- accel/accel.sh@21 -- # val=Yes 00:07:16.666 11:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:16.666 11:45:10 -- accel/accel.sh@21 -- # val= 00:07:16.666 11:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:16.666 11:45:10 -- accel/accel.sh@21 -- # val= 00:07:16.666 11:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:16.666 11:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:17.609 11:45:11 -- accel/accel.sh@21 -- # val= 00:07:17.609 11:45:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.609 11:45:11 -- accel/accel.sh@20 -- # IFS=: 00:07:17.609 11:45:11 -- accel/accel.sh@20 -- # read -r var val 00:07:17.609 11:45:11 -- accel/accel.sh@21 -- # val= 00:07:17.609 11:45:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.609 11:45:11 -- accel/accel.sh@20 -- # IFS=: 00:07:17.609 11:45:11 -- accel/accel.sh@20 -- # read -r var val 00:07:17.609 11:45:11 -- accel/accel.sh@21 -- # val= 00:07:17.609 11:45:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.609 11:45:11 -- accel/accel.sh@20 -- # IFS=: 00:07:17.609 11:45:11 -- accel/accel.sh@20 -- # read -r var val 00:07:17.609 11:45:11 -- accel/accel.sh@21 -- # val= 00:07:17.609 11:45:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.609 11:45:11 -- accel/accel.sh@20 -- # IFS=: 00:07:17.609 11:45:11 -- accel/accel.sh@20 -- # read -r var val 00:07:17.609 11:45:11 -- accel/accel.sh@21 -- # val= 00:07:17.609 11:45:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.609 11:45:11 -- accel/accel.sh@20 -- # IFS=: 00:07:17.609 11:45:11 -- accel/accel.sh@20 -- # read -r var val 00:07:17.609 11:45:11 -- accel/accel.sh@21 -- # val= 00:07:17.609 11:45:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.609 11:45:11 -- accel/accel.sh@20 -- # IFS=: 00:07:17.609 11:45:11 -- accel/accel.sh@20 -- # read -r var val 00:07:17.609 11:45:11 -- accel/accel.sh@21 -- # val= 00:07:17.609 11:45:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.609 11:45:11 -- accel/accel.sh@20 -- # IFS=: 00:07:17.609 11:45:11 -- accel/accel.sh@20 -- # read -r var val 00:07:17.609 11:45:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.609 11:45:11 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:17.609 11:45:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.609 00:07:17.609 real 0m2.582s 00:07:17.609 user 0m2.385s 00:07:17.609 sys 0m0.201s 00:07:17.609 11:45:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.609 11:45:11 -- common/autotest_common.sh@10 -- # set +x 
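For reference, the accel_decomp_mthread case above is a thin wrapper around a single accel_perf invocation. A minimal standalone sketch follows; the flag meanings are inferred from the "Accel Perf Configuration" block printed earlier in this run (run time, workload type, verify, threads/core), so treat them as assumptions rather than documented semantics, and note the harness additionally passes -c /dev/fd/62 to supply its accel JSON config.

    # Run time 1 second (-t 1), workload decompress (-w), verify output (-y),
    # two worker threads per core (-T 2, which yields the 0,0 and 0,1 rows above),
    # and the bib test file as input (-l). Paths are relative to the SPDK workspace used in this run.
    ./build/examples/accel_perf -t 1 -w decompress -y -T 2 -l ./test/accel/bib
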
00:07:17.609 ************************************ 00:07:17.609 END TEST accel_decomp_mthread 00:07:17.609 ************************************ 00:07:17.881 11:45:11 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:17.881 11:45:11 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:17.881 11:45:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.881 11:45:11 -- common/autotest_common.sh@10 -- # set +x 00:07:17.881 ************************************ 00:07:17.881 START TEST accel_deomp_full_mthread 00:07:17.881 ************************************ 00:07:17.881 11:45:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:17.881 11:45:11 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.881 11:45:11 -- accel/accel.sh@17 -- # local accel_module 00:07:17.882 11:45:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:17.882 11:45:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:17.882 11:45:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.882 11:45:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.882 11:45:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.882 11:45:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.882 11:45:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.882 11:45:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.882 11:45:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.882 11:45:11 -- accel/accel.sh@42 -- # jq -r . 00:07:17.882 [2024-06-10 11:45:11.418645] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:17.882 [2024-06-10 11:45:11.418744] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752977 ] 00:07:17.882 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.882 [2024-06-10 11:45:11.484235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.882 [2024-06-10 11:45:11.548547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.265 11:45:12 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:19.265 00:07:19.265 SPDK Configuration: 00:07:19.265 Core mask: 0x1 00:07:19.265 00:07:19.265 Accel Perf Configuration: 00:07:19.265 Workload Type: decompress 00:07:19.265 Transfer size: 111250 bytes 00:07:19.265 Vector count 1 00:07:19.265 Module: software 00:07:19.265 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:19.265 Queue depth: 32 00:07:19.265 Allocate depth: 32 00:07:19.265 # threads/core: 2 00:07:19.265 Run time: 1 seconds 00:07:19.265 Verify: Yes 00:07:19.265 00:07:19.265 Running for 1 seconds... 
00:07:19.265 00:07:19.265 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:19.265 ------------------------------------------------------------------------------------ 00:07:19.265 0,1 2112/s 87 MiB/s 0 0 00:07:19.265 0,0 2048/s 84 MiB/s 0 0 00:07:19.265 ==================================================================================== 00:07:19.265 Total 4160/s 441 MiB/s 0 0' 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # IFS=: 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # read -r var val 00:07:19.265 11:45:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:19.265 11:45:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:19.265 11:45:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.265 11:45:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.265 11:45:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.265 11:45:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.265 11:45:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.265 11:45:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.265 11:45:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.265 11:45:12 -- accel/accel.sh@42 -- # jq -r . 00:07:19.265 [2024-06-10 11:45:12.736296] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:19.265 [2024-06-10 11:45:12.736371] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1753142 ] 00:07:19.265 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.265 [2024-06-10 11:45:12.797626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.265 [2024-06-10 11:45:12.859897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.265 11:45:12 -- accel/accel.sh@21 -- # val= 00:07:19.265 11:45:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # IFS=: 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # read -r var val 00:07:19.265 11:45:12 -- accel/accel.sh@21 -- # val= 00:07:19.265 11:45:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # IFS=: 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # read -r var val 00:07:19.265 11:45:12 -- accel/accel.sh@21 -- # val= 00:07:19.265 11:45:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # IFS=: 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # read -r var val 00:07:19.265 11:45:12 -- accel/accel.sh@21 -- # val=0x1 00:07:19.265 11:45:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # IFS=: 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # read -r var val 00:07:19.265 11:45:12 -- accel/accel.sh@21 -- # val= 00:07:19.265 11:45:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # IFS=: 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # read -r var val 00:07:19.265 11:45:12 -- accel/accel.sh@21 -- # val= 00:07:19.265 11:45:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # IFS=: 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # read -r var val 00:07:19.265 11:45:12 -- accel/accel.sh@21 -- # val=decompress 00:07:19.265 
11:45:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.265 11:45:12 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # IFS=: 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # read -r var val 00:07:19.265 11:45:12 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:19.265 11:45:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # IFS=: 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # read -r var val 00:07:19.265 11:45:12 -- accel/accel.sh@21 -- # val= 00:07:19.265 11:45:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # IFS=: 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # read -r var val 00:07:19.265 11:45:12 -- accel/accel.sh@21 -- # val=software 00:07:19.265 11:45:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.265 11:45:12 -- accel/accel.sh@23 -- # accel_module=software 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # IFS=: 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # read -r var val 00:07:19.265 11:45:12 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:19.265 11:45:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # IFS=: 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # read -r var val 00:07:19.265 11:45:12 -- accel/accel.sh@21 -- # val=32 00:07:19.265 11:45:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # IFS=: 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # read -r var val 00:07:19.265 11:45:12 -- accel/accel.sh@21 -- # val=32 00:07:19.265 11:45:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # IFS=: 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # read -r var val 00:07:19.265 11:45:12 -- accel/accel.sh@21 -- # val=2 00:07:19.265 11:45:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # IFS=: 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # read -r var val 00:07:19.265 11:45:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:19.265 11:45:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # IFS=: 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # read -r var val 00:07:19.265 11:45:12 -- accel/accel.sh@21 -- # val=Yes 00:07:19.265 11:45:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # IFS=: 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # read -r var val 00:07:19.265 11:45:12 -- accel/accel.sh@21 -- # val= 00:07:19.265 11:45:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # IFS=: 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # read -r var val 00:07:19.265 11:45:12 -- accel/accel.sh@21 -- # val= 00:07:19.265 11:45:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # IFS=: 00:07:19.265 11:45:12 -- accel/accel.sh@20 -- # read -r var val 00:07:20.650 11:45:14 -- accel/accel.sh@21 -- # val= 00:07:20.650 11:45:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.650 11:45:14 -- accel/accel.sh@20 -- # IFS=: 00:07:20.650 11:45:14 -- accel/accel.sh@20 -- # read -r var val 00:07:20.650 11:45:14 -- accel/accel.sh@21 -- # val= 00:07:20.650 11:45:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.650 11:45:14 -- accel/accel.sh@20 -- # IFS=: 00:07:20.650 11:45:14 -- accel/accel.sh@20 -- # read -r var val 00:07:20.650 11:45:14 -- accel/accel.sh@21 -- # val= 00:07:20.650 11:45:14 -- accel/accel.sh@22 -- # 
case "$var" in 00:07:20.650 11:45:14 -- accel/accel.sh@20 -- # IFS=: 00:07:20.650 11:45:14 -- accel/accel.sh@20 -- # read -r var val 00:07:20.650 11:45:14 -- accel/accel.sh@21 -- # val= 00:07:20.650 11:45:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.650 11:45:14 -- accel/accel.sh@20 -- # IFS=: 00:07:20.650 11:45:14 -- accel/accel.sh@20 -- # read -r var val 00:07:20.650 11:45:14 -- accel/accel.sh@21 -- # val= 00:07:20.650 11:45:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.650 11:45:14 -- accel/accel.sh@20 -- # IFS=: 00:07:20.650 11:45:14 -- accel/accel.sh@20 -- # read -r var val 00:07:20.650 11:45:14 -- accel/accel.sh@21 -- # val= 00:07:20.650 11:45:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.650 11:45:14 -- accel/accel.sh@20 -- # IFS=: 00:07:20.650 11:45:14 -- accel/accel.sh@20 -- # read -r var val 00:07:20.650 11:45:14 -- accel/accel.sh@21 -- # val= 00:07:20.650 11:45:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.650 11:45:14 -- accel/accel.sh@20 -- # IFS=: 00:07:20.650 11:45:14 -- accel/accel.sh@20 -- # read -r var val 00:07:20.650 11:45:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:20.650 11:45:14 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:20.650 11:45:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.650 00:07:20.650 real 0m2.631s 00:07:20.650 user 0m2.435s 00:07:20.650 sys 0m0.204s 00:07:20.650 11:45:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.650 11:45:14 -- common/autotest_common.sh@10 -- # set +x 00:07:20.650 ************************************ 00:07:20.650 END TEST accel_deomp_full_mthread 00:07:20.650 ************************************ 00:07:20.650 11:45:14 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:20.650 11:45:14 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:20.650 11:45:14 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:20.650 11:45:14 -- accel/accel.sh@129 -- # build_accel_config 00:07:20.650 11:45:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.650 11:45:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.650 11:45:14 -- common/autotest_common.sh@10 -- # set +x 00:07:20.650 11:45:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.650 11:45:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.650 11:45:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.650 11:45:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.650 11:45:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.650 11:45:14 -- accel/accel.sh@42 -- # jq -r . 00:07:20.650 ************************************ 00:07:20.650 START TEST accel_dif_functional_tests 00:07:20.650 ************************************ 00:07:20.650 11:45:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:20.650 [2024-06-10 11:45:14.107989] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:20.650 [2024-06-10 11:45:14.108049] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1753360 ] 00:07:20.650 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.650 [2024-06-10 11:45:14.169000] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.650 [2024-06-10 11:45:14.235585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.650 [2024-06-10 11:45:14.235701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.650 [2024-06-10 11:45:14.235704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.650 00:07:20.650 00:07:20.650 CUnit - A unit testing framework for C - Version 2.1-3 00:07:20.650 http://cunit.sourceforge.net/ 00:07:20.650 00:07:20.650 00:07:20.650 Suite: accel_dif 00:07:20.650 Test: verify: DIF generated, GUARD check ...passed 00:07:20.650 Test: verify: DIF generated, APPTAG check ...passed 00:07:20.650 Test: verify: DIF generated, REFTAG check ...passed 00:07:20.650 Test: verify: DIF not generated, GUARD check ...[2024-06-10 11:45:14.290753] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:20.650 [2024-06-10 11:45:14.290791] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:20.650 passed 00:07:20.650 Test: verify: DIF not generated, APPTAG check ...[2024-06-10 11:45:14.290821] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:20.650 [2024-06-10 11:45:14.290836] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:20.650 passed 00:07:20.650 Test: verify: DIF not generated, REFTAG check ...[2024-06-10 11:45:14.290853] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:20.650 [2024-06-10 11:45:14.290866] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:20.650 passed 00:07:20.650 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:20.650 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-10 11:45:14.290909] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:20.650 passed 00:07:20.650 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:20.650 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:20.650 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:20.650 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-10 11:45:14.291024] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:20.650 passed 00:07:20.650 Test: generate copy: DIF generated, GUARD check ...passed 00:07:20.650 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:20.650 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:20.650 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:20.650 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:20.650 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:20.650 Test: generate copy: iovecs-len validate ...[2024-06-10 11:45:14.291211] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:20.650 passed 00:07:20.650 Test: generate copy: buffer alignment validate ...passed 00:07:20.650 00:07:20.650 Run Summary: Type Total Ran Passed Failed Inactive 00:07:20.650 suites 1 1 n/a 0 0 00:07:20.650 tests 20 20 20 0 0 00:07:20.650 asserts 204 204 204 0 n/a 00:07:20.650 00:07:20.650 Elapsed time = 0.002 seconds 00:07:20.650 00:07:20.650 real 0m0.340s 00:07:20.650 user 0m0.473s 00:07:20.651 sys 0m0.130s 00:07:20.651 11:45:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.651 11:45:14 -- common/autotest_common.sh@10 -- # set +x 00:07:20.651 ************************************ 00:07:20.651 END TEST accel_dif_functional_tests 00:07:20.651 ************************************ 00:07:20.912 00:07:20.912 real 0m54.692s 00:07:20.912 user 1m3.308s 00:07:20.912 sys 0m5.521s 00:07:20.912 11:45:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.912 11:45:14 -- common/autotest_common.sh@10 -- # set +x 00:07:20.912 ************************************ 00:07:20.912 END TEST accel 00:07:20.912 ************************************ 00:07:20.912 11:45:14 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:20.912 11:45:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:20.912 11:45:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.912 11:45:14 -- common/autotest_common.sh@10 -- # set +x 00:07:20.912 ************************************ 00:07:20.912 START TEST accel_rpc 00:07:20.912 ************************************ 00:07:20.912 11:45:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:20.912 * Looking for test storage... 00:07:20.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:20.912 11:45:14 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:20.912 11:45:14 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1753636 00:07:20.912 11:45:14 -- accel/accel_rpc.sh@15 -- # waitforlisten 1753636 00:07:20.912 11:45:14 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:20.912 11:45:14 -- common/autotest_common.sh@819 -- # '[' -z 1753636 ']' 00:07:20.912 11:45:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.912 11:45:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:20.912 11:45:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.912 11:45:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:20.912 11:45:14 -- common/autotest_common.sh@10 -- # set +x 00:07:20.912 [2024-06-10 11:45:14.633869] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:20.912 [2024-06-10 11:45:14.633943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1753636 ] 00:07:20.912 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.173 [2024-06-10 11:45:14.700038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.173 [2024-06-10 11:45:14.768705] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:21.173 [2024-06-10 11:45:14.768851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.744 11:45:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:21.744 11:45:15 -- common/autotest_common.sh@852 -- # return 0 00:07:21.744 11:45:15 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:21.744 11:45:15 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:21.744 11:45:15 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:21.744 11:45:15 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:21.745 11:45:15 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:21.745 11:45:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:21.745 11:45:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:21.745 11:45:15 -- common/autotest_common.sh@10 -- # set +x 00:07:21.745 ************************************ 00:07:21.745 START TEST accel_assign_opcode 00:07:21.745 ************************************ 00:07:21.745 11:45:15 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:21.745 11:45:15 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:21.745 11:45:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:21.745 11:45:15 -- common/autotest_common.sh@10 -- # set +x 00:07:21.745 [2024-06-10 11:45:15.414720] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:21.745 11:45:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:21.745 11:45:15 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:21.745 11:45:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:21.745 11:45:15 -- common/autotest_common.sh@10 -- # set +x 00:07:21.745 [2024-06-10 11:45:15.426746] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:21.745 11:45:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:21.745 11:45:15 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:21.745 11:45:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:21.745 11:45:15 -- common/autotest_common.sh@10 -- # set +x 00:07:22.006 11:45:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:22.006 11:45:15 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:22.006 11:45:15 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:22.006 11:45:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:22.006 11:45:15 -- accel/accel_rpc.sh@42 -- # grep software 00:07:22.006 11:45:15 -- common/autotest_common.sh@10 -- # set +x 00:07:22.006 11:45:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:22.006 software 00:07:22.006 00:07:22.006 real 0m0.215s 00:07:22.006 user 0m0.048s 00:07:22.006 sys 0m0.011s 00:07:22.006 11:45:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.006 11:45:15 -- common/autotest_common.sh@10 -- # set +x 
00:07:22.006 ************************************ 00:07:22.006 END TEST accel_assign_opcode 00:07:22.006 ************************************ 00:07:22.006 11:45:15 -- accel/accel_rpc.sh@55 -- # killprocess 1753636 00:07:22.006 11:45:15 -- common/autotest_common.sh@926 -- # '[' -z 1753636 ']' 00:07:22.006 11:45:15 -- common/autotest_common.sh@930 -- # kill -0 1753636 00:07:22.006 11:45:15 -- common/autotest_common.sh@931 -- # uname 00:07:22.006 11:45:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:22.006 11:45:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1753636 00:07:22.006 11:45:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:22.006 11:45:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:22.006 11:45:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1753636' 00:07:22.006 killing process with pid 1753636 00:07:22.006 11:45:15 -- common/autotest_common.sh@945 -- # kill 1753636 00:07:22.006 11:45:15 -- common/autotest_common.sh@950 -- # wait 1753636 00:07:22.267 00:07:22.267 real 0m1.440s 00:07:22.267 user 0m1.514s 00:07:22.267 sys 0m0.388s 00:07:22.267 11:45:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.267 11:45:15 -- common/autotest_common.sh@10 -- # set +x 00:07:22.267 ************************************ 00:07:22.267 END TEST accel_rpc 00:07:22.267 ************************************ 00:07:22.267 11:45:15 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:22.267 11:45:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:22.267 11:45:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:22.267 11:45:15 -- common/autotest_common.sh@10 -- # set +x 00:07:22.267 ************************************ 00:07:22.267 START TEST app_cmdline 00:07:22.268 ************************************ 00:07:22.268 11:45:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:22.528 * Looking for test storage... 00:07:22.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:22.528 11:45:16 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:22.528 11:45:16 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1753938 00:07:22.528 11:45:16 -- app/cmdline.sh@18 -- # waitforlisten 1753938 00:07:22.528 11:45:16 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:22.528 11:45:16 -- common/autotest_common.sh@819 -- # '[' -z 1753938 ']' 00:07:22.528 11:45:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.528 11:45:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:22.528 11:45:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.528 11:45:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:22.528 11:45:16 -- common/autotest_common.sh@10 -- # set +x 00:07:22.529 [2024-06-10 11:45:16.114852] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:22.529 [2024-06-10 11:45:16.114915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1753938 ] 00:07:22.529 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.529 [2024-06-10 11:45:16.177130] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.529 [2024-06-10 11:45:16.239201] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:22.529 [2024-06-10 11:45:16.239347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.470 11:45:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:23.470 11:45:16 -- common/autotest_common.sh@852 -- # return 0 00:07:23.470 11:45:16 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:23.470 { 00:07:23.470 "version": "SPDK v24.01.1-pre git sha1 130b9406a", 00:07:23.470 "fields": { 00:07:23.470 "major": 24, 00:07:23.470 "minor": 1, 00:07:23.470 "patch": 1, 00:07:23.470 "suffix": "-pre", 00:07:23.470 "commit": "130b9406a" 00:07:23.470 } 00:07:23.470 } 00:07:23.470 11:45:17 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:23.470 11:45:17 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:23.470 11:45:17 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:23.470 11:45:17 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:23.470 11:45:17 -- app/cmdline.sh@26 -- # sort 00:07:23.470 11:45:17 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:23.470 11:45:17 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:23.470 11:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:23.470 11:45:17 -- common/autotest_common.sh@10 -- # set +x 00:07:23.470 11:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:23.470 11:45:17 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:23.470 11:45:17 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:23.470 11:45:17 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:23.470 11:45:17 -- common/autotest_common.sh@640 -- # local es=0 00:07:23.470 11:45:17 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:23.470 11:45:17 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.470 11:45:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:23.470 11:45:17 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.470 11:45:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:23.470 11:45:17 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.470 11:45:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:23.470 11:45:17 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.470 11:45:17 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:23.470 11:45:17 -- 
common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:23.731 request: 00:07:23.731 { 00:07:23.731 "method": "env_dpdk_get_mem_stats", 00:07:23.731 "req_id": 1 00:07:23.731 } 00:07:23.731 Got JSON-RPC error response 00:07:23.731 response: 00:07:23.731 { 00:07:23.731 "code": -32601, 00:07:23.731 "message": "Method not found" 00:07:23.731 } 00:07:23.731 11:45:17 -- common/autotest_common.sh@643 -- # es=1 00:07:23.731 11:45:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:23.731 11:45:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:23.731 11:45:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:23.731 11:45:17 -- app/cmdline.sh@1 -- # killprocess 1753938 00:07:23.731 11:45:17 -- common/autotest_common.sh@926 -- # '[' -z 1753938 ']' 00:07:23.731 11:45:17 -- common/autotest_common.sh@930 -- # kill -0 1753938 00:07:23.731 11:45:17 -- common/autotest_common.sh@931 -- # uname 00:07:23.731 11:45:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:23.731 11:45:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1753938 00:07:23.731 11:45:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:23.731 11:45:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:23.731 11:45:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1753938' 00:07:23.731 killing process with pid 1753938 00:07:23.731 11:45:17 -- common/autotest_common.sh@945 -- # kill 1753938 00:07:23.731 11:45:17 -- common/autotest_common.sh@950 -- # wait 1753938 00:07:23.992 00:07:23.992 real 0m1.572s 00:07:23.992 user 0m1.914s 00:07:23.992 sys 0m0.390s 00:07:23.992 11:45:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.992 11:45:17 -- common/autotest_common.sh@10 -- # set +x 00:07:23.992 ************************************ 00:07:23.992 END TEST app_cmdline 00:07:23.992 ************************************ 00:07:23.992 11:45:17 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:23.992 11:45:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:23.992 11:45:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:23.992 11:45:17 -- common/autotest_common.sh@10 -- # set +x 00:07:23.992 ************************************ 00:07:23.992 START TEST version 00:07:23.992 ************************************ 00:07:23.992 11:45:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:23.992 * Looking for test storage... 
00:07:23.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:23.992 11:45:17 -- app/version.sh@17 -- # get_header_version major 00:07:23.992 11:45:17 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:23.992 11:45:17 -- app/version.sh@14 -- # cut -f2 00:07:23.992 11:45:17 -- app/version.sh@14 -- # tr -d '"' 00:07:23.992 11:45:17 -- app/version.sh@17 -- # major=24 00:07:23.992 11:45:17 -- app/version.sh@18 -- # get_header_version minor 00:07:23.992 11:45:17 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:23.992 11:45:17 -- app/version.sh@14 -- # cut -f2 00:07:23.992 11:45:17 -- app/version.sh@14 -- # tr -d '"' 00:07:23.992 11:45:17 -- app/version.sh@18 -- # minor=1 00:07:23.992 11:45:17 -- app/version.sh@19 -- # get_header_version patch 00:07:23.992 11:45:17 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:23.992 11:45:17 -- app/version.sh@14 -- # cut -f2 00:07:23.992 11:45:17 -- app/version.sh@14 -- # tr -d '"' 00:07:23.992 11:45:17 -- app/version.sh@19 -- # patch=1 00:07:23.992 11:45:17 -- app/version.sh@20 -- # get_header_version suffix 00:07:23.992 11:45:17 -- app/version.sh@14 -- # cut -f2 00:07:23.992 11:45:17 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:23.992 11:45:17 -- app/version.sh@14 -- # tr -d '"' 00:07:23.992 11:45:17 -- app/version.sh@20 -- # suffix=-pre 00:07:23.992 11:45:17 -- app/version.sh@22 -- # version=24.1 00:07:23.992 11:45:17 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:23.992 11:45:17 -- app/version.sh@25 -- # version=24.1.1 00:07:23.992 11:45:17 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:23.992 11:45:17 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:23.992 11:45:17 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:23.992 11:45:17 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:23.992 11:45:17 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:23.992 00:07:23.992 real 0m0.173s 00:07:23.992 user 0m0.103s 00:07:23.992 sys 0m0.108s 00:07:23.992 11:45:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.992 11:45:17 -- common/autotest_common.sh@10 -- # set +x 00:07:23.992 ************************************ 00:07:23.992 END TEST version 00:07:23.992 ************************************ 00:07:24.254 11:45:17 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:24.254 11:45:17 -- spdk/autotest.sh@204 -- # uname -s 00:07:24.254 11:45:17 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:24.254 11:45:17 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:24.254 11:45:17 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:24.254 11:45:17 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:07:24.254 11:45:17 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:07:24.254 11:45:17 -- spdk/autotest.sh@268 -- # timing_exit lib 00:07:24.254 11:45:17 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:07:24.254 11:45:17 -- common/autotest_common.sh@10 -- # set +x 00:07:24.254 11:45:17 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:24.254 11:45:17 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:07:24.254 11:45:17 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:07:24.254 11:45:17 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:07:24.254 11:45:17 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:07:24.254 11:45:17 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:07:24.254 11:45:17 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:24.254 11:45:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:24.254 11:45:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:24.254 11:45:17 -- common/autotest_common.sh@10 -- # set +x 00:07:24.254 ************************************ 00:07:24.254 START TEST nvmf_tcp 00:07:24.254 ************************************ 00:07:24.254 11:45:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:24.254 * Looking for test storage... 00:07:24.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:24.254 11:45:17 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:24.254 11:45:17 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:24.254 11:45:17 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:24.254 11:45:17 -- nvmf/common.sh@7 -- # uname -s 00:07:24.254 11:45:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.254 11:45:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.254 11:45:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.254 11:45:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.254 11:45:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.254 11:45:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.254 11:45:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.254 11:45:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.254 11:45:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.254 11:45:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.254 11:45:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:24.254 11:45:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:24.255 11:45:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.255 11:45:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.255 11:45:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:24.255 11:45:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:24.255 11:45:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.255 11:45:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.255 11:45:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.255 11:45:17 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.255 11:45:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.255 11:45:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.255 11:45:17 -- paths/export.sh@5 -- # export PATH 00:07:24.255 11:45:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.255 11:45:17 -- nvmf/common.sh@46 -- # : 0 00:07:24.255 11:45:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:24.255 11:45:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:24.255 11:45:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:24.255 11:45:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.255 11:45:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.255 11:45:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:24.255 11:45:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:24.255 11:45:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:24.255 11:45:17 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:24.255 11:45:17 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:24.255 11:45:17 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:24.255 11:45:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:24.255 11:45:17 -- common/autotest_common.sh@10 -- # set +x 00:07:24.255 11:45:17 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:24.255 11:45:17 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:24.255 11:45:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:24.255 11:45:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:24.255 11:45:17 -- common/autotest_common.sh@10 -- # set +x 00:07:24.255 ************************************ 00:07:24.255 START TEST nvmf_example 00:07:24.255 ************************************ 00:07:24.255 11:45:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:24.517 * Looking for test storage... 
00:07:24.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:24.517 11:45:18 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:24.517 11:45:18 -- nvmf/common.sh@7 -- # uname -s 00:07:24.517 11:45:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.517 11:45:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.517 11:45:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.517 11:45:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.517 11:45:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.517 11:45:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.517 11:45:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.517 11:45:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.517 11:45:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.517 11:45:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.517 11:45:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:24.517 11:45:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:24.517 11:45:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.517 11:45:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.517 11:45:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:24.517 11:45:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:24.517 11:45:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.517 11:45:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.517 11:45:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.517 11:45:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.517 11:45:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.517 11:45:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.517 11:45:18 -- paths/export.sh@5 -- # export PATH 00:07:24.517 11:45:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.517 11:45:18 -- nvmf/common.sh@46 -- # : 0 00:07:24.517 11:45:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:24.517 11:45:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:24.517 11:45:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:24.517 11:45:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.517 11:45:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.517 11:45:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:24.517 11:45:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:24.517 11:45:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:24.517 11:45:18 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:24.517 11:45:18 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:24.517 11:45:18 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:24.517 11:45:18 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:24.517 11:45:18 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:24.517 11:45:18 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:24.517 11:45:18 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:24.517 11:45:18 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:24.517 11:45:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:24.517 11:45:18 -- common/autotest_common.sh@10 -- # set +x 00:07:24.517 11:45:18 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:24.517 11:45:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:24.517 11:45:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:24.517 11:45:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:24.517 11:45:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:24.517 11:45:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:24.517 11:45:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.517 11:45:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:24.517 11:45:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.517 11:45:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:24.517 11:45:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:24.517 11:45:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:24.517 11:45:18 -- 
common/autotest_common.sh@10 -- # set +x 00:07:32.749 11:45:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:32.749 11:45:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:32.749 11:45:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:32.749 11:45:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:32.749 11:45:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:32.749 11:45:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:32.749 11:45:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:32.749 11:45:24 -- nvmf/common.sh@294 -- # net_devs=() 00:07:32.749 11:45:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:32.749 11:45:24 -- nvmf/common.sh@295 -- # e810=() 00:07:32.749 11:45:24 -- nvmf/common.sh@295 -- # local -ga e810 00:07:32.749 11:45:24 -- nvmf/common.sh@296 -- # x722=() 00:07:32.749 11:45:24 -- nvmf/common.sh@296 -- # local -ga x722 00:07:32.749 11:45:24 -- nvmf/common.sh@297 -- # mlx=() 00:07:32.749 11:45:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:32.749 11:45:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:32.749 11:45:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:32.749 11:45:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:32.749 11:45:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:32.749 11:45:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:32.749 11:45:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:32.749 11:45:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:32.749 11:45:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:32.749 11:45:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:32.749 11:45:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:32.749 11:45:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:32.749 11:45:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:32.749 11:45:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:32.749 11:45:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:32.749 11:45:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:32.749 11:45:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:32.749 11:45:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:32.749 11:45:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:32.749 11:45:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:32.749 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:32.749 11:45:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:32.749 11:45:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:32.749 11:45:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.749 11:45:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.749 11:45:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:32.749 11:45:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:32.749 11:45:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:32.749 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:32.749 11:45:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:32.749 11:45:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:32.749 11:45:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.749 11:45:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:07:32.749 11:45:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:32.749 11:45:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:32.749 11:45:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:32.749 11:45:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:32.749 11:45:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:32.749 11:45:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.749 11:45:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:32.749 11:45:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.749 11:45:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:32.749 Found net devices under 0000:31:00.0: cvl_0_0 00:07:32.749 11:45:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.749 11:45:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:32.749 11:45:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.749 11:45:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:32.749 11:45:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.749 11:45:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:32.749 Found net devices under 0000:31:00.1: cvl_0_1 00:07:32.749 11:45:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.749 11:45:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:32.749 11:45:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:32.749 11:45:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:32.749 11:45:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:32.749 11:45:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:32.749 11:45:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:32.749 11:45:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:32.749 11:45:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:32.749 11:45:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:32.749 11:45:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:32.749 11:45:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:32.749 11:45:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:32.749 11:45:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:32.749 11:45:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:32.749 11:45:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:32.749 11:45:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:32.749 11:45:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:32.749 11:45:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:32.749 11:45:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:32.749 11:45:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:32.749 11:45:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:32.749 11:45:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:32.749 11:45:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:32.749 11:45:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:32.749 11:45:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:32.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:32.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:07:32.749 00:07:32.749 --- 10.0.0.2 ping statistics --- 00:07:32.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.749 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:07:32.749 11:45:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:32.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:32.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:07:32.749 00:07:32.749 --- 10.0.0.1 ping statistics --- 00:07:32.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.749 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:07:32.749 11:45:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:32.749 11:45:25 -- nvmf/common.sh@410 -- # return 0 00:07:32.749 11:45:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:32.749 11:45:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:32.749 11:45:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:32.749 11:45:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:32.749 11:45:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:32.749 11:45:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:32.749 11:45:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:32.749 11:45:25 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:32.749 11:45:25 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:32.750 11:45:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:32.750 11:45:25 -- common/autotest_common.sh@10 -- # set +x 00:07:32.750 11:45:25 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:32.750 11:45:25 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:32.750 11:45:25 -- target/nvmf_example.sh@34 -- # nvmfpid=1758322 00:07:32.750 11:45:25 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:32.750 11:45:25 -- target/nvmf_example.sh@36 -- # waitforlisten 1758322 00:07:32.750 11:45:25 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:32.750 11:45:25 -- common/autotest_common.sh@819 -- # '[' -z 1758322 ']' 00:07:32.750 11:45:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.750 11:45:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:32.750 11:45:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
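For reference, the nvmf_tcp_init sequence traced above reduces to the following steps (a condensed sketch reconstructed from this trace; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are the ones this run detected on the two E810 ports):

# Move one port into a private namespace so target and initiator traffic crosses real NICs
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP on the host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP listener port
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # connectivity check both ways
modprobe nvme-tcp                                                    # kernel initiator used by later tests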
00:07:32.750 11:45:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:32.750 11:45:25 -- common/autotest_common.sh@10 -- # set +x 00:07:32.750 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.750 11:45:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:32.750 11:45:26 -- common/autotest_common.sh@852 -- # return 0 00:07:32.750 11:45:26 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:32.750 11:45:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:32.750 11:45:26 -- common/autotest_common.sh@10 -- # set +x 00:07:32.750 11:45:26 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:32.750 11:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:32.750 11:45:26 -- common/autotest_common.sh@10 -- # set +x 00:07:32.750 11:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:32.750 11:45:26 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:32.750 11:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:32.750 11:45:26 -- common/autotest_common.sh@10 -- # set +x 00:07:32.750 11:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:32.750 11:45:26 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:32.750 11:45:26 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:32.750 11:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:32.750 11:45:26 -- common/autotest_common.sh@10 -- # set +x 00:07:32.750 11:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:32.750 11:45:26 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:32.750 11:45:26 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:32.750 11:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:32.750 11:45:26 -- common/autotest_common.sh@10 -- # set +x 00:07:32.750 11:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:32.750 11:45:26 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:32.750 11:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:32.750 11:45:26 -- common/autotest_common.sh@10 -- # set +x 00:07:32.750 11:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:32.750 11:45:26 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:32.750 11:45:26 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:32.750 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.977 Initializing NVMe Controllers 00:07:44.977 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:44.977 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:44.977 Initialization complete. Launching workers. 
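The target configuration and workload that produce the results below can be read out of the trace as the following sequence. rpc_cmd is the test harness' wrapper around the SPDK JSON-RPC interface; it is expanded here to direct scripts/rpc.py calls against the default /var/tmp/spdk.sock socket, which is an assumption about the wrapper, and $SPDK stands in for the repository root path shown in the trace:

# Start the nvmf example target inside the namespace (shm id 0, group 10000, core mask 0xF)
ip netns exec cvl_0_0_ns_spdk $SPDK/build/examples/nvmf -i 0 -g 10000 -m 0xF &

# Configure it over JSON-RPC: TCP transport, one 64 MiB / 512 B-block malloc namespace, one listener
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK/scripts/rpc.py bdev_malloc_create 64 512                        # -> Malloc0
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Drive it from the host side: 4 KiB mixed random read/write, queue depth 64, for 10 s (-M 30)
$SPDK/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'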
00:07:44.977 ======================================================== 00:07:44.977 Latency(us) 00:07:44.977 Device Information : IOPS MiB/s Average min max 00:07:44.977 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18192.00 71.06 3517.64 811.20 16614.96 00:07:44.977 ======================================================== 00:07:44.977 Total : 18192.00 71.06 3517.64 811.20 16614.96 00:07:44.977 00:07:44.977 11:45:36 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:44.977 11:45:36 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:44.977 11:45:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:44.977 11:45:36 -- nvmf/common.sh@116 -- # sync 00:07:44.977 11:45:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:44.977 11:45:36 -- nvmf/common.sh@119 -- # set +e 00:07:44.977 11:45:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:44.977 11:45:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:44.977 rmmod nvme_tcp 00:07:44.977 rmmod nvme_fabrics 00:07:44.977 rmmod nvme_keyring 00:07:44.977 11:45:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:44.977 11:45:36 -- nvmf/common.sh@123 -- # set -e 00:07:44.977 11:45:36 -- nvmf/common.sh@124 -- # return 0 00:07:44.977 11:45:36 -- nvmf/common.sh@477 -- # '[' -n 1758322 ']' 00:07:44.977 11:45:36 -- nvmf/common.sh@478 -- # killprocess 1758322 00:07:44.977 11:45:36 -- common/autotest_common.sh@926 -- # '[' -z 1758322 ']' 00:07:44.977 11:45:36 -- common/autotest_common.sh@930 -- # kill -0 1758322 00:07:44.977 11:45:36 -- common/autotest_common.sh@931 -- # uname 00:07:44.977 11:45:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:44.977 11:45:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1758322 00:07:44.977 11:45:36 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:07:44.977 11:45:36 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:07:44.977 11:45:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1758322' 00:07:44.977 killing process with pid 1758322 00:07:44.977 11:45:36 -- common/autotest_common.sh@945 -- # kill 1758322 00:07:44.977 11:45:36 -- common/autotest_common.sh@950 -- # wait 1758322 00:07:44.977 nvmf threads initialize successfully 00:07:44.977 bdev subsystem init successfully 00:07:44.977 created a nvmf target service 00:07:44.977 create targets's poll groups done 00:07:44.977 all subsystems of target started 00:07:44.977 nvmf target is running 00:07:44.977 all subsystems of target stopped 00:07:44.978 destroy targets's poll groups done 00:07:44.978 destroyed the nvmf target service 00:07:44.978 bdev subsystem finish successfully 00:07:44.978 nvmf threads destroy successfully 00:07:44.978 11:45:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:44.978 11:45:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:44.978 11:45:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:44.978 11:45:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:44.978 11:45:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:44.978 11:45:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.978 11:45:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:44.978 11:45:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.238 11:45:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:45.238 11:45:38 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:45.238 11:45:38 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:07:45.238 11:45:38 -- common/autotest_common.sh@10 -- # set +x 00:07:45.238 00:07:45.238 real 0m20.936s 00:07:45.238 user 0m46.622s 00:07:45.238 sys 0m6.415s 00:07:45.238 11:45:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.238 11:45:38 -- common/autotest_common.sh@10 -- # set +x 00:07:45.238 ************************************ 00:07:45.238 END TEST nvmf_example 00:07:45.238 ************************************ 00:07:45.238 11:45:38 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:45.238 11:45:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:45.238 11:45:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:45.238 11:45:38 -- common/autotest_common.sh@10 -- # set +x 00:07:45.238 ************************************ 00:07:45.238 START TEST nvmf_filesystem 00:07:45.238 ************************************ 00:07:45.238 11:45:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:45.502 * Looking for test storage... 00:07:45.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.502 11:45:39 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:45.502 11:45:39 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:45.502 11:45:39 -- common/autotest_common.sh@34 -- # set -e 00:07:45.502 11:45:39 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:45.502 11:45:39 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:45.502 11:45:39 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:45.502 11:45:39 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:45.502 11:45:39 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:45.502 11:45:39 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:45.502 11:45:39 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:45.502 11:45:39 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:45.502 11:45:39 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:45.502 11:45:39 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:45.502 11:45:39 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:45.502 11:45:39 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:45.502 11:45:39 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:45.502 11:45:39 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:45.502 11:45:39 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:45.502 11:45:39 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:45.502 11:45:39 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:45.502 11:45:39 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:45.502 11:45:39 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:45.502 11:45:39 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:45.502 11:45:39 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:45.502 11:45:39 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:45.502 11:45:39 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:45.502 11:45:39 -- 
common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:45.502 11:45:39 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:45.502 11:45:39 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:45.502 11:45:39 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:45.502 11:45:39 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:45.502 11:45:39 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:45.502 11:45:39 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:45.502 11:45:39 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:45.502 11:45:39 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:45.502 11:45:39 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:45.502 11:45:39 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:45.502 11:45:39 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:45.502 11:45:39 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:45.502 11:45:39 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:45.502 11:45:39 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:45.502 11:45:39 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:45.502 11:45:39 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:45.502 11:45:39 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:45.502 11:45:39 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:45.502 11:45:39 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:45.502 11:45:39 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:45.502 11:45:39 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:45.502 11:45:39 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:45.502 11:45:39 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:45.502 11:45:39 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:45.502 11:45:39 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:45.502 11:45:39 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:45.502 11:45:39 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:45.502 11:45:39 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:45.502 11:45:39 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:45.502 11:45:39 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:45.502 11:45:39 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:07:45.502 11:45:39 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:45.502 11:45:39 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:45.502 11:45:39 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:45.502 11:45:39 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:45.502 11:45:39 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:45.502 11:45:39 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:45.502 11:45:39 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:45.502 11:45:39 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:45.502 11:45:39 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:45.502 11:45:39 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:07:45.502 11:45:39 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:45.502 11:45:39 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:45.502 11:45:39 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:45.502 11:45:39 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:45.502 11:45:39 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 
00:07:45.502 11:45:39 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:45.502 11:45:39 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:45.502 11:45:39 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:45.502 11:45:39 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:45.502 11:45:39 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:45.502 11:45:39 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:45.502 11:45:39 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:45.502 11:45:39 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:45.502 11:45:39 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:45.502 11:45:39 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:45.502 11:45:39 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:45.502 11:45:39 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:45.502 11:45:39 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:45.502 11:45:39 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:45.502 11:45:39 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:45.502 11:45:39 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:45.502 11:45:39 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:45.502 11:45:39 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:45.502 11:45:39 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:45.502 11:45:39 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:45.502 11:45:39 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:45.502 11:45:39 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:45.502 11:45:39 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:45.502 11:45:39 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:45.502 11:45:39 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:45.502 11:45:39 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:45.502 11:45:39 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:45.502 11:45:39 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:45.502 11:45:39 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:45.502 #define SPDK_CONFIG_H 00:07:45.502 #define SPDK_CONFIG_APPS 1 00:07:45.502 #define SPDK_CONFIG_ARCH native 00:07:45.502 #undef SPDK_CONFIG_ASAN 00:07:45.502 #undef SPDK_CONFIG_AVAHI 00:07:45.503 #undef SPDK_CONFIG_CET 00:07:45.503 #define SPDK_CONFIG_COVERAGE 1 00:07:45.503 #define SPDK_CONFIG_CROSS_PREFIX 00:07:45.503 #undef SPDK_CONFIG_CRYPTO 00:07:45.503 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:45.503 #undef SPDK_CONFIG_CUSTOMOCF 00:07:45.503 #undef SPDK_CONFIG_DAOS 00:07:45.503 #define SPDK_CONFIG_DAOS_DIR 00:07:45.503 #define SPDK_CONFIG_DEBUG 1 00:07:45.503 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:45.503 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:45.503 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:45.503 #define 
SPDK_CONFIG_DPDK_LIB_DIR 00:07:45.503 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:45.503 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:45.503 #define SPDK_CONFIG_EXAMPLES 1 00:07:45.503 #undef SPDK_CONFIG_FC 00:07:45.503 #define SPDK_CONFIG_FC_PATH 00:07:45.503 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:45.503 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:45.503 #undef SPDK_CONFIG_FUSE 00:07:45.503 #undef SPDK_CONFIG_FUZZER 00:07:45.503 #define SPDK_CONFIG_FUZZER_LIB 00:07:45.503 #undef SPDK_CONFIG_GOLANG 00:07:45.503 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:45.503 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:45.503 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:45.503 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:45.503 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:45.503 #define SPDK_CONFIG_IDXD 1 00:07:45.503 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:45.503 #undef SPDK_CONFIG_IPSEC_MB 00:07:45.503 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:45.503 #define SPDK_CONFIG_ISAL 1 00:07:45.503 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:45.503 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:45.503 #define SPDK_CONFIG_LIBDIR 00:07:45.503 #undef SPDK_CONFIG_LTO 00:07:45.503 #define SPDK_CONFIG_MAX_LCORES 00:07:45.503 #define SPDK_CONFIG_NVME_CUSE 1 00:07:45.503 #undef SPDK_CONFIG_OCF 00:07:45.503 #define SPDK_CONFIG_OCF_PATH 00:07:45.503 #define SPDK_CONFIG_OPENSSL_PATH 00:07:45.503 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:45.503 #undef SPDK_CONFIG_PGO_USE 00:07:45.503 #define SPDK_CONFIG_PREFIX /usr/local 00:07:45.503 #undef SPDK_CONFIG_RAID5F 00:07:45.503 #undef SPDK_CONFIG_RBD 00:07:45.503 #define SPDK_CONFIG_RDMA 1 00:07:45.503 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:45.503 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:45.503 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:45.503 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:45.503 #define SPDK_CONFIG_SHARED 1 00:07:45.503 #undef SPDK_CONFIG_SMA 00:07:45.503 #define SPDK_CONFIG_TESTS 1 00:07:45.503 #undef SPDK_CONFIG_TSAN 00:07:45.503 #define SPDK_CONFIG_UBLK 1 00:07:45.503 #define SPDK_CONFIG_UBSAN 1 00:07:45.503 #undef SPDK_CONFIG_UNIT_TESTS 00:07:45.503 #undef SPDK_CONFIG_URING 00:07:45.503 #define SPDK_CONFIG_URING_PATH 00:07:45.503 #undef SPDK_CONFIG_URING_ZNS 00:07:45.503 #undef SPDK_CONFIG_USDT 00:07:45.503 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:45.503 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:45.503 #undef SPDK_CONFIG_VFIO_USER 00:07:45.503 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:45.503 #define SPDK_CONFIG_VHOST 1 00:07:45.503 #define SPDK_CONFIG_VIRTIO 1 00:07:45.503 #undef SPDK_CONFIG_VTUNE 00:07:45.503 #define SPDK_CONFIG_VTUNE_DIR 00:07:45.503 #define SPDK_CONFIG_WERROR 1 00:07:45.503 #define SPDK_CONFIG_WPDK_DIR 00:07:45.503 #undef SPDK_CONFIG_XNVME 00:07:45.503 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:45.503 11:45:39 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:45.503 11:45:39 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.503 11:45:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.503 11:45:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.503 11:45:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.503 11:45:39 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.503 11:45:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.503 11:45:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.503 11:45:39 -- paths/export.sh@5 -- # export PATH 00:07:45.503 11:45:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.503 11:45:39 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:45.503 11:45:39 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:45.503 11:45:39 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:45.503 11:45:39 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:45.503 11:45:39 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:45.503 11:45:39 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:45.503 11:45:39 -- pm/common@16 -- # TEST_TAG=N/A 00:07:45.503 11:45:39 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:45.503 11:45:39 -- common/autotest_common.sh@52 -- # : 1 00:07:45.503 11:45:39 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:45.503 11:45:39 -- common/autotest_common.sh@56 -- # : 0 00:07:45.503 11:45:39 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:45.503 11:45:39 -- 
common/autotest_common.sh@58 -- # : 0 00:07:45.503 11:45:39 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:45.503 11:45:39 -- common/autotest_common.sh@60 -- # : 1 00:07:45.503 11:45:39 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:45.503 11:45:39 -- common/autotest_common.sh@62 -- # : 0 00:07:45.503 11:45:39 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:45.503 11:45:39 -- common/autotest_common.sh@64 -- # : 00:07:45.503 11:45:39 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:45.503 11:45:39 -- common/autotest_common.sh@66 -- # : 0 00:07:45.503 11:45:39 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:45.503 11:45:39 -- common/autotest_common.sh@68 -- # : 0 00:07:45.503 11:45:39 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:45.503 11:45:39 -- common/autotest_common.sh@70 -- # : 0 00:07:45.503 11:45:39 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:45.503 11:45:39 -- common/autotest_common.sh@72 -- # : 0 00:07:45.503 11:45:39 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:45.503 11:45:39 -- common/autotest_common.sh@74 -- # : 0 00:07:45.503 11:45:39 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:45.503 11:45:39 -- common/autotest_common.sh@76 -- # : 0 00:07:45.503 11:45:39 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:45.503 11:45:39 -- common/autotest_common.sh@78 -- # : 0 00:07:45.503 11:45:39 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:45.503 11:45:39 -- common/autotest_common.sh@80 -- # : 1 00:07:45.503 11:45:39 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:45.503 11:45:39 -- common/autotest_common.sh@82 -- # : 0 00:07:45.503 11:45:39 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:45.503 11:45:39 -- common/autotest_common.sh@84 -- # : 0 00:07:45.503 11:45:39 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:45.503 11:45:39 -- common/autotest_common.sh@86 -- # : 1 00:07:45.503 11:45:39 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:45.503 11:45:39 -- common/autotest_common.sh@88 -- # : 0 00:07:45.503 11:45:39 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:45.503 11:45:39 -- common/autotest_common.sh@90 -- # : 0 00:07:45.503 11:45:39 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:45.503 11:45:39 -- common/autotest_common.sh@92 -- # : 0 00:07:45.503 11:45:39 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:45.503 11:45:39 -- common/autotest_common.sh@94 -- # : 0 00:07:45.503 11:45:39 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:45.503 11:45:39 -- common/autotest_common.sh@96 -- # : tcp 00:07:45.503 11:45:39 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:45.503 11:45:39 -- common/autotest_common.sh@98 -- # : 0 00:07:45.503 11:45:39 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:45.503 11:45:39 -- common/autotest_common.sh@100 -- # : 0 00:07:45.503 11:45:39 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:45.503 11:45:39 -- common/autotest_common.sh@102 -- # : 0 00:07:45.503 11:45:39 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:45.503 11:45:39 -- common/autotest_common.sh@104 -- # : 0 00:07:45.503 11:45:39 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:45.503 
11:45:39 -- common/autotest_common.sh@106 -- # : 0 00:07:45.503 11:45:39 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:45.503 11:45:39 -- common/autotest_common.sh@108 -- # : 0 00:07:45.503 11:45:39 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:45.503 11:45:39 -- common/autotest_common.sh@110 -- # : 0 00:07:45.503 11:45:39 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:45.504 11:45:39 -- common/autotest_common.sh@112 -- # : 0 00:07:45.504 11:45:39 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:45.504 11:45:39 -- common/autotest_common.sh@114 -- # : 0 00:07:45.504 11:45:39 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:45.504 11:45:39 -- common/autotest_common.sh@116 -- # : 1 00:07:45.504 11:45:39 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:45.504 11:45:39 -- common/autotest_common.sh@118 -- # : 00:07:45.504 11:45:39 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:45.504 11:45:39 -- common/autotest_common.sh@120 -- # : 0 00:07:45.504 11:45:39 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:45.504 11:45:39 -- common/autotest_common.sh@122 -- # : 0 00:07:45.504 11:45:39 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:45.504 11:45:39 -- common/autotest_common.sh@124 -- # : 0 00:07:45.504 11:45:39 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:45.504 11:45:39 -- common/autotest_common.sh@126 -- # : 0 00:07:45.504 11:45:39 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:45.504 11:45:39 -- common/autotest_common.sh@128 -- # : 0 00:07:45.504 11:45:39 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:45.504 11:45:39 -- common/autotest_common.sh@130 -- # : 0 00:07:45.504 11:45:39 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:45.504 11:45:39 -- common/autotest_common.sh@132 -- # : 00:07:45.504 11:45:39 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:45.504 11:45:39 -- common/autotest_common.sh@134 -- # : true 00:07:45.504 11:45:39 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:45.504 11:45:39 -- common/autotest_common.sh@136 -- # : 0 00:07:45.504 11:45:39 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:45.504 11:45:39 -- common/autotest_common.sh@138 -- # : 0 00:07:45.504 11:45:39 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:45.504 11:45:39 -- common/autotest_common.sh@140 -- # : 0 00:07:45.504 11:45:39 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:45.504 11:45:39 -- common/autotest_common.sh@142 -- # : 0 00:07:45.504 11:45:39 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:45.504 11:45:39 -- common/autotest_common.sh@144 -- # : 0 00:07:45.504 11:45:39 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:45.504 11:45:39 -- common/autotest_common.sh@146 -- # : 0 00:07:45.504 11:45:39 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:45.504 11:45:39 -- common/autotest_common.sh@148 -- # : e810 00:07:45.504 11:45:39 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:45.504 11:45:39 -- common/autotest_common.sh@150 -- # : 0 00:07:45.504 11:45:39 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:45.504 11:45:39 -- common/autotest_common.sh@152 -- # : 0 00:07:45.504 11:45:39 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 
00:07:45.504 11:45:39 -- common/autotest_common.sh@154 -- # : 0 00:07:45.504 11:45:39 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:45.504 11:45:39 -- common/autotest_common.sh@156 -- # : 0 00:07:45.504 11:45:39 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:45.504 11:45:39 -- common/autotest_common.sh@158 -- # : 0 00:07:45.504 11:45:39 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:45.504 11:45:39 -- common/autotest_common.sh@160 -- # : 0 00:07:45.504 11:45:39 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:45.504 11:45:39 -- common/autotest_common.sh@163 -- # : 00:07:45.504 11:45:39 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:45.504 11:45:39 -- common/autotest_common.sh@165 -- # : 0 00:07:45.504 11:45:39 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:45.504 11:45:39 -- common/autotest_common.sh@167 -- # : 0 00:07:45.504 11:45:39 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:45.504 11:45:39 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:45.504 11:45:39 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:45.504 11:45:39 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:45.504 11:45:39 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:45.504 11:45:39 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:45.504 11:45:39 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:45.504 11:45:39 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:45.504 11:45:39 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:45.504 11:45:39 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:45.504 11:45:39 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:45.504 11:45:39 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:45.504 11:45:39 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:45.504 11:45:39 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:45.504 11:45:39 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:45.504 11:45:39 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:45.504 11:45:39 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:45.504 11:45:39 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:45.504 11:45:39 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:45.504 11:45:39 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:45.504 11:45:39 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:45.504 11:45:39 -- common/autotest_common.sh@196 -- # cat 00:07:45.504 11:45:39 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:45.504 11:45:39 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:45.504 11:45:39 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:45.504 11:45:39 -- common/autotest_common.sh@226 -- # export 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:45.504 11:45:39 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:45.504 11:45:39 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:45.504 11:45:39 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:45.504 11:45:39 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:45.504 11:45:39 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:45.504 11:45:39 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:45.504 11:45:39 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:45.504 11:45:39 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:45.504 11:45:39 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:45.504 11:45:39 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:45.504 11:45:39 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:45.504 11:45:39 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:45.504 11:45:39 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:45.504 11:45:39 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:45.504 11:45:39 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:45.504 11:45:39 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:07:45.504 11:45:39 -- common/autotest_common.sh@249 -- # export valgrind= 00:07:45.504 11:45:39 -- common/autotest_common.sh@249 -- # valgrind= 00:07:45.504 11:45:39 -- common/autotest_common.sh@255 -- # uname -s 00:07:45.504 11:45:39 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:07:45.504 11:45:39 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:07:45.504 11:45:39 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:07:45.504 11:45:39 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:07:45.504 11:45:39 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:45.504 11:45:39 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:45.504 11:45:39 -- common/autotest_common.sh@265 -- # MAKE=make 00:07:45.504 11:45:39 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j144 00:07:45.504 11:45:39 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:07:45.504 11:45:39 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:07:45.504 11:45:39 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:45.504 11:45:39 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:07:45.504 11:45:39 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:07:45.505 11:45:39 -- common/autotest_common.sh@291 -- # for i in "$@" 00:07:45.505 11:45:39 -- common/autotest_common.sh@292 -- # case "$i" in 00:07:45.505 11:45:39 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:07:45.505 11:45:39 -- common/autotest_common.sh@309 -- # [[ -z 1761147 ]] 00:07:45.505 11:45:39 -- common/autotest_common.sh@309 -- # 
kill -0 1761147 00:07:45.505 11:45:39 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:07:45.505 11:45:39 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:07:45.505 11:45:39 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:07:45.505 11:45:39 -- common/autotest_common.sh@322 -- # local mount target_dir 00:07:45.505 11:45:39 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:07:45.505 11:45:39 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:07:45.505 11:45:39 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:07:45.505 11:45:39 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:07:45.505 11:45:39 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.v5e7od 00:07:45.505 11:45:39 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:45.505 11:45:39 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:07:45.505 11:45:39 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:07:45.505 11:45:39 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.v5e7od/tests/target /tmp/spdk.v5e7od 00:07:45.505 11:45:39 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:07:45.505 11:45:39 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:45.505 11:45:39 -- common/autotest_common.sh@318 -- # df -T 00:07:45.505 11:45:39 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:07:45.505 11:45:39 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:07:45.505 11:45:39 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:07:45.505 11:45:39 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:07:45.505 11:45:39 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:07:45.505 11:45:39 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:07:45.505 11:45:39 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:45.505 11:45:39 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:07:45.505 11:45:39 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:07:45.505 11:45:39 -- common/autotest_common.sh@353 -- # avails["$mount"]=957403136 00:07:45.505 11:45:39 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:07:45.505 11:45:39 -- common/autotest_common.sh@354 -- # uses["$mount"]=4327026688 00:07:45.505 11:45:39 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:45.505 11:45:39 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:07:45.505 11:45:39 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:07:45.505 11:45:39 -- common/autotest_common.sh@353 -- # avails["$mount"]=123027148800 00:07:45.505 11:45:39 -- common/autotest_common.sh@353 -- # sizes["$mount"]=129370996736 00:07:45.505 11:45:39 -- common/autotest_common.sh@354 -- # uses["$mount"]=6343847936 00:07:45.505 11:45:39 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:45.505 11:45:39 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:45.505 11:45:39 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:45.505 11:45:39 -- common/autotest_common.sh@353 -- # avails["$mount"]=64682905600 00:07:45.505 11:45:39 -- common/autotest_common.sh@353 -- # 
sizes["$mount"]=64685498368 00:07:45.505 11:45:39 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:07:45.505 11:45:39 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:45.505 11:45:39 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:45.505 11:45:39 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:45.505 11:45:39 -- common/autotest_common.sh@353 -- # avails["$mount"]=25864454144 00:07:45.505 11:45:39 -- common/autotest_common.sh@353 -- # sizes["$mount"]=25874202624 00:07:45.505 11:45:39 -- common/autotest_common.sh@354 -- # uses["$mount"]=9748480 00:07:45.505 11:45:39 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:45.505 11:45:39 -- common/autotest_common.sh@352 -- # mounts["$mount"]=efivarfs 00:07:45.505 11:45:39 -- common/autotest_common.sh@352 -- # fss["$mount"]=efivarfs 00:07:45.505 11:45:39 -- common/autotest_common.sh@353 -- # avails["$mount"]=179200 00:07:45.505 11:45:39 -- common/autotest_common.sh@353 -- # sizes["$mount"]=507904 00:07:45.505 11:45:39 -- common/autotest_common.sh@354 -- # uses["$mount"]=324608 00:07:45.505 11:45:39 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:45.505 11:45:39 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:45.505 11:45:39 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:45.505 11:45:39 -- common/autotest_common.sh@353 -- # avails["$mount"]=64684290048 00:07:45.505 11:45:39 -- common/autotest_common.sh@353 -- # sizes["$mount"]=64685498368 00:07:45.505 11:45:39 -- common/autotest_common.sh@354 -- # uses["$mount"]=1208320 00:07:45.505 11:45:39 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:45.505 11:45:39 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:45.505 11:45:39 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:45.505 11:45:39 -- common/autotest_common.sh@353 -- # avails["$mount"]=12937093120 00:07:45.505 11:45:39 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12937097216 00:07:45.505 11:45:39 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:07:45.505 11:45:39 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:45.505 11:45:39 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:07:45.505 * Looking for test storage... 
00:07:45.505 11:45:39 -- common/autotest_common.sh@359 -- # local target_space new_size 00:07:45.505 11:45:39 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:07:45.505 11:45:39 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.505 11:45:39 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:45.505 11:45:39 -- common/autotest_common.sh@363 -- # mount=/ 00:07:45.505 11:45:39 -- common/autotest_common.sh@365 -- # target_space=123027148800 00:07:45.505 11:45:39 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:07:45.505 11:45:39 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:07:45.505 11:45:39 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:07:45.505 11:45:39 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:07:45.505 11:45:39 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:07:45.505 11:45:39 -- common/autotest_common.sh@372 -- # new_size=8558440448 00:07:45.505 11:45:39 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:45.505 11:45:39 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.505 11:45:39 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.505 11:45:39 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.505 11:45:39 -- common/autotest_common.sh@380 -- # return 0 00:07:45.505 11:45:39 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:07:45.505 11:45:39 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:07:45.505 11:45:39 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:45.505 11:45:39 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:45.505 11:45:39 -- common/autotest_common.sh@1672 -- # true 00:07:45.505 11:45:39 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:07:45.505 11:45:39 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:45.505 11:45:39 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:45.505 11:45:39 -- common/autotest_common.sh@27 -- # exec 00:07:45.505 11:45:39 -- common/autotest_common.sh@29 -- # exec 00:07:45.505 11:45:39 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:45.505 11:45:39 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:45.505 11:45:39 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:45.505 11:45:39 -- common/autotest_common.sh@18 -- # set -x 00:07:45.505 11:45:39 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.505 11:45:39 -- nvmf/common.sh@7 -- # uname -s 00:07:45.505 11:45:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.505 11:45:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.505 11:45:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.505 11:45:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.505 11:45:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.505 11:45:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.505 11:45:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.505 11:45:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.505 11:45:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.505 11:45:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.505 11:45:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:45.505 11:45:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:45.505 11:45:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.505 11:45:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.505 11:45:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.505 11:45:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.505 11:45:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.505 11:45:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.505 11:45:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.505 11:45:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.505 11:45:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.506 11:45:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.506 11:45:39 -- paths/export.sh@5 -- # export PATH 00:07:45.506 11:45:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.506 11:45:39 -- nvmf/common.sh@46 -- # : 0 00:07:45.506 11:45:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:45.506 11:45:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:45.506 11:45:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:45.506 11:45:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.506 11:45:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.506 11:45:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:45.506 11:45:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:45.506 11:45:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:45.506 11:45:39 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:45.506 11:45:39 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:45.506 11:45:39 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:45.506 11:45:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:45.506 11:45:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.506 11:45:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:45.506 11:45:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:45.506 11:45:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:45.506 11:45:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.506 11:45:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.506 11:45:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.506 11:45:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:45.506 11:45:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:45.506 11:45:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:45.506 11:45:39 -- common/autotest_common.sh@10 -- # set +x 00:07:53.650 11:45:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:53.650 11:45:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:53.650 11:45:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:53.650 11:45:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:53.650 11:45:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:53.650 11:45:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:53.650 11:45:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:53.650 11:45:46 -- 
nvmf/common.sh@294 -- # net_devs=() 00:07:53.650 11:45:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:53.650 11:45:46 -- nvmf/common.sh@295 -- # e810=() 00:07:53.650 11:45:46 -- nvmf/common.sh@295 -- # local -ga e810 00:07:53.650 11:45:46 -- nvmf/common.sh@296 -- # x722=() 00:07:53.650 11:45:46 -- nvmf/common.sh@296 -- # local -ga x722 00:07:53.650 11:45:46 -- nvmf/common.sh@297 -- # mlx=() 00:07:53.650 11:45:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:53.650 11:45:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.650 11:45:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.650 11:45:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.650 11:45:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.650 11:45:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.650 11:45:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.650 11:45:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.650 11:45:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:53.650 11:45:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.650 11:45:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.650 11:45:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.650 11:45:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:53.650 11:45:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:53.650 11:45:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:53.650 11:45:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:53.650 11:45:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:53.650 11:45:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:53.650 11:45:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:53.650 11:45:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:53.650 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:53.650 11:45:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:53.650 11:45:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:53.650 11:45:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.650 11:45:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.650 11:45:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:53.650 11:45:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:53.650 11:45:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:53.650 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:53.650 11:45:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:53.650 11:45:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:53.650 11:45:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.650 11:45:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.650 11:45:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:53.650 11:45:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:53.650 11:45:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:53.650 11:45:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:53.650 11:45:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:53.650 11:45:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.650 11:45:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:53.650 11:45:46 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.650 11:45:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:53.650 Found net devices under 0000:31:00.0: cvl_0_0 00:07:53.650 11:45:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.650 11:45:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:53.650 11:45:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.650 11:45:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:53.650 11:45:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.650 11:45:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:53.650 Found net devices under 0000:31:00.1: cvl_0_1 00:07:53.650 11:45:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.650 11:45:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:53.650 11:45:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:53.650 11:45:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:53.650 11:45:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:53.650 11:45:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:53.650 11:45:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.650 11:45:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.650 11:45:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:53.650 11:45:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:53.650 11:45:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:53.650 11:45:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:53.650 11:45:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:53.650 11:45:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:53.650 11:45:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.650 11:45:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:53.650 11:45:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:53.650 11:45:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:53.650 11:45:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:53.650 11:45:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:53.650 11:45:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:53.650 11:45:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:53.650 11:45:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:53.650 11:45:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:53.650 11:45:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:53.650 11:45:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:53.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:53.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.485 ms 00:07:53.650 00:07:53.650 --- 10.0.0.2 ping statistics --- 00:07:53.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.650 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:07:53.650 11:45:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:53.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:53.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:07:53.650 00:07:53.650 --- 10.0.0.1 ping statistics --- 00:07:53.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.650 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:07:53.650 11:45:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.650 11:45:46 -- nvmf/common.sh@410 -- # return 0 00:07:53.650 11:45:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:53.650 11:45:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.650 11:45:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:53.650 11:45:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:53.650 11:45:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.650 11:45:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:53.650 11:45:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:53.650 11:45:46 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:53.650 11:45:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:53.650 11:45:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:53.650 11:45:46 -- common/autotest_common.sh@10 -- # set +x 00:07:53.650 ************************************ 00:07:53.650 START TEST nvmf_filesystem_no_in_capsule 00:07:53.650 ************************************ 00:07:53.650 11:45:46 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:07:53.650 11:45:46 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:53.650 11:45:46 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:53.651 11:45:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:53.651 11:45:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:53.651 11:45:46 -- common/autotest_common.sh@10 -- # set +x 00:07:53.651 11:45:46 -- nvmf/common.sh@469 -- # nvmfpid=1764849 00:07:53.651 11:45:46 -- nvmf/common.sh@470 -- # waitforlisten 1764849 00:07:53.651 11:45:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:53.651 11:45:46 -- common/autotest_common.sh@819 -- # '[' -z 1764849 ']' 00:07:53.651 11:45:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.651 11:45:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:53.651 11:45:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.651 11:45:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:53.651 11:45:46 -- common/autotest_common.sh@10 -- # set +x 00:07:53.651 [2024-06-10 11:45:46.421853] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
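Condensed replay of the nvmf_tcp_init steps traced above: the two E810 ports found under 0000:31:00.0/.1 (cvl_0_0, cvl_0_1) are split into a target network namespace plus a host-side initiator, and reachability is verified with one ping in each direction before the target starts. Interface names and addresses are the ones from this run:

    TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                      # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"                  # initiator stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1                 # target ns -> root ns
    modprobe nvme-tcp                                      # host-side initiator driver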
00:07:53.651 [2024-06-10 11:45:46.421913] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.651 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.651 [2024-06-10 11:45:46.492226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:53.651 [2024-06-10 11:45:46.567174] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:53.651 [2024-06-10 11:45:46.567316] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.651 [2024-06-10 11:45:46.567326] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.651 [2024-06-10 11:45:46.567335] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:53.651 [2024-06-10 11:45:46.567534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.651 [2024-06-10 11:45:46.567679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.651 [2024-06-10 11:45:46.567681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.651 [2024-06-10 11:45:46.567547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.651 11:45:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:53.651 11:45:47 -- common/autotest_common.sh@852 -- # return 0 00:07:53.651 11:45:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:53.651 11:45:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:53.651 11:45:47 -- common/autotest_common.sh@10 -- # set +x 00:07:53.651 11:45:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.651 11:45:47 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:53.651 11:45:47 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:53.651 11:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:53.651 11:45:47 -- common/autotest_common.sh@10 -- # set +x 00:07:53.651 [2024-06-10 11:45:47.243389] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.651 11:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:53.651 11:45:47 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:53.651 11:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:53.651 11:45:47 -- common/autotest_common.sh@10 -- # set +x 00:07:53.651 Malloc1 00:07:53.651 11:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:53.651 11:45:47 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:53.651 11:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:53.651 11:45:47 -- common/autotest_common.sh@10 -- # set +x 00:07:53.651 11:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:53.651 11:45:47 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:53.651 11:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:53.651 11:45:47 -- common/autotest_common.sh@10 -- # set +x 00:07:53.651 11:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:53.651 11:45:47 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
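The rpc_cmd calls traced above (rpc_cmd is the test wrapper around scripts/rpc.py in the SPDK tree) build the whole target side for the no-in-capsule pass. Roughly equivalent stand-alone commands, using the values from this run:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # (waitforlisten in the trace polls the RPC socket before any RPC is issued)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # -c 0: no in-capsule data
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1             # 512 MiB bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420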
00:07:53.651 11:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:53.651 11:45:47 -- common/autotest_common.sh@10 -- # set +x 00:07:53.651 [2024-06-10 11:45:47.375486] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.651 11:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:53.651 11:45:47 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:53.651 11:45:47 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:07:53.651 11:45:47 -- common/autotest_common.sh@1358 -- # local bdev_info 00:07:53.651 11:45:47 -- common/autotest_common.sh@1359 -- # local bs 00:07:53.651 11:45:47 -- common/autotest_common.sh@1360 -- # local nb 00:07:53.651 11:45:47 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:53.651 11:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:53.651 11:45:47 -- common/autotest_common.sh@10 -- # set +x 00:07:53.651 11:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:53.651 11:45:47 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:07:53.651 { 00:07:53.651 "name": "Malloc1", 00:07:53.651 "aliases": [ 00:07:53.651 "d1a753d6-e3b2-40c4-aefc-a0c933c76610" 00:07:53.651 ], 00:07:53.651 "product_name": "Malloc disk", 00:07:53.651 "block_size": 512, 00:07:53.651 "num_blocks": 1048576, 00:07:53.651 "uuid": "d1a753d6-e3b2-40c4-aefc-a0c933c76610", 00:07:53.651 "assigned_rate_limits": { 00:07:53.651 "rw_ios_per_sec": 0, 00:07:53.651 "rw_mbytes_per_sec": 0, 00:07:53.651 "r_mbytes_per_sec": 0, 00:07:53.651 "w_mbytes_per_sec": 0 00:07:53.651 }, 00:07:53.651 "claimed": true, 00:07:53.651 "claim_type": "exclusive_write", 00:07:53.651 "zoned": false, 00:07:53.651 "supported_io_types": { 00:07:53.651 "read": true, 00:07:53.651 "write": true, 00:07:53.651 "unmap": true, 00:07:53.651 "write_zeroes": true, 00:07:53.651 "flush": true, 00:07:53.651 "reset": true, 00:07:53.651 "compare": false, 00:07:53.651 "compare_and_write": false, 00:07:53.651 "abort": true, 00:07:53.651 "nvme_admin": false, 00:07:53.651 "nvme_io": false 00:07:53.651 }, 00:07:53.651 "memory_domains": [ 00:07:53.651 { 00:07:53.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.651 "dma_device_type": 2 00:07:53.651 } 00:07:53.651 ], 00:07:53.651 "driver_specific": {} 00:07:53.651 } 00:07:53.651 ]' 00:07:53.651 11:45:47 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:07:53.912 11:45:47 -- common/autotest_common.sh@1362 -- # bs=512 00:07:53.912 11:45:47 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:07:53.912 11:45:47 -- common/autotest_common.sh@1363 -- # nb=1048576 00:07:53.912 11:45:47 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:07:53.912 11:45:47 -- common/autotest_common.sh@1367 -- # echo 512 00:07:53.912 11:45:47 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:53.912 11:45:47 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:55.297 11:45:49 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:55.297 11:45:49 -- common/autotest_common.sh@1177 -- # local i=0 00:07:55.297 11:45:49 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:07:55.297 11:45:49 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:07:55.297 11:45:49 -- common/autotest_common.sh@1184 -- # sleep 2 00:07:57.842 11:45:51 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:07:57.842 11:45:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:07:57.842 11:45:51 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:07:57.842 11:45:51 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:07:57.842 11:45:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:07:57.842 11:45:51 -- common/autotest_common.sh@1187 -- # return 0 00:07:57.842 11:45:51 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:57.842 11:45:51 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:57.842 11:45:51 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:57.842 11:45:51 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:57.842 11:45:51 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:57.843 11:45:51 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:57.843 11:45:51 -- setup/common.sh@80 -- # echo 536870912 00:07:57.843 11:45:51 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:57.843 11:45:51 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:57.843 11:45:51 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:57.843 11:45:51 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:57.843 11:45:51 -- target/filesystem.sh@69 -- # partprobe 00:07:58.414 11:45:51 -- target/filesystem.sh@70 -- # sleep 1 00:07:59.357 11:45:52 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:59.357 11:45:52 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:59.357 11:45:52 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:59.357 11:45:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:59.357 11:45:52 -- common/autotest_common.sh@10 -- # set +x 00:07:59.357 ************************************ 00:07:59.357 START TEST filesystem_ext4 00:07:59.357 ************************************ 00:07:59.357 11:45:52 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:59.357 11:45:52 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:59.357 11:45:52 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:59.357 11:45:52 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:59.357 11:45:52 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:07:59.357 11:45:52 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:59.357 11:45:52 -- common/autotest_common.sh@904 -- # local i=0 00:07:59.357 11:45:52 -- common/autotest_common.sh@905 -- # local force 00:07:59.357 11:45:52 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:07:59.357 11:45:52 -- common/autotest_common.sh@908 -- # force=-F 00:07:59.357 11:45:52 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:59.357 mke2fs 1.46.5 (30-Dec-2021) 00:07:59.357 Discarding device blocks: 0/522240 done 00:07:59.357 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:59.357 Filesystem UUID: 2dcf00bc-51fa-4ddf-9007-1088d94272cb 00:07:59.357 Superblock backups stored on blocks: 00:07:59.357 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:59.357 00:07:59.357 Allocating group tables: 0/64 done 00:07:59.357 Writing inode tables: 0/64 done 00:08:00.299 Creating journal (8192 blocks): done 00:08:00.299 Writing superblocks and filesystem accounting information: 0/64 done 00:08:00.299 00:08:00.299 11:45:53 -- 
common/autotest_common.sh@921 -- # return 0 00:08:00.299 11:45:53 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:00.559 11:45:54 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:00.559 11:45:54 -- target/filesystem.sh@25 -- # sync 00:08:00.559 11:45:54 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:00.559 11:45:54 -- target/filesystem.sh@27 -- # sync 00:08:00.559 11:45:54 -- target/filesystem.sh@29 -- # i=0 00:08:00.559 11:45:54 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:00.559 11:45:54 -- target/filesystem.sh@37 -- # kill -0 1764849 00:08:00.559 11:45:54 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:00.559 11:45:54 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:00.559 11:45:54 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:00.559 11:45:54 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:00.559 00:08:00.559 real 0m1.275s 00:08:00.559 user 0m0.029s 00:08:00.559 sys 0m0.045s 00:08:00.559 11:45:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.559 11:45:54 -- common/autotest_common.sh@10 -- # set +x 00:08:00.560 ************************************ 00:08:00.560 END TEST filesystem_ext4 00:08:00.560 ************************************ 00:08:00.560 11:45:54 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:00.560 11:45:54 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:00.560 11:45:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:00.560 11:45:54 -- common/autotest_common.sh@10 -- # set +x 00:08:00.560 ************************************ 00:08:00.560 START TEST filesystem_btrfs 00:08:00.560 ************************************ 00:08:00.560 11:45:54 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:00.560 11:45:54 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:00.560 11:45:54 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:00.560 11:45:54 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:00.560 11:45:54 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:00.560 11:45:54 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:00.560 11:45:54 -- common/autotest_common.sh@904 -- # local i=0 00:08:00.560 11:45:54 -- common/autotest_common.sh@905 -- # local force 00:08:00.560 11:45:54 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:00.560 11:45:54 -- common/autotest_common.sh@910 -- # force=-f 00:08:00.560 11:45:54 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:01.132 btrfs-progs v6.6.2 00:08:01.132 See https://btrfs.readthedocs.io for more information. 00:08:01.132 00:08:01.132 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:01.132 NOTE: several default settings have changed in version 5.15, please make sure 00:08:01.132 this does not affect your deployments: 00:08:01.132 - DUP for metadata (-m dup) 00:08:01.132 - enabled no-holes (-O no-holes) 00:08:01.132 - enabled free-space-tree (-R free-space-tree) 00:08:01.132 00:08:01.132 Label: (null) 00:08:01.132 UUID: 189315a6-4cf5-4ea5-a99f-a1e62fea0424 00:08:01.132 Node size: 16384 00:08:01.132 Sector size: 4096 00:08:01.132 Filesystem size: 510.00MiB 00:08:01.132 Block group profiles: 00:08:01.132 Data: single 8.00MiB 00:08:01.132 Metadata: DUP 32.00MiB 00:08:01.132 System: DUP 8.00MiB 00:08:01.132 SSD detected: yes 00:08:01.132 Zoned device: no 00:08:01.132 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:01.132 Runtime features: free-space-tree 00:08:01.132 Checksum: crc32c 00:08:01.132 Number of devices: 1 00:08:01.132 Devices: 00:08:01.132 ID SIZE PATH 00:08:01.132 1 510.00MiB /dev/nvme0n1p1 00:08:01.132 00:08:01.132 11:45:54 -- common/autotest_common.sh@921 -- # return 0 00:08:01.132 11:45:54 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:01.704 11:45:55 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:01.704 11:45:55 -- target/filesystem.sh@25 -- # sync 00:08:01.704 11:45:55 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:01.704 11:45:55 -- target/filesystem.sh@27 -- # sync 00:08:01.704 11:45:55 -- target/filesystem.sh@29 -- # i=0 00:08:01.704 11:45:55 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:01.704 11:45:55 -- target/filesystem.sh@37 -- # kill -0 1764849 00:08:01.704 11:45:55 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:01.704 11:45:55 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:01.704 11:45:55 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:01.704 11:45:55 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:01.704 00:08:01.704 real 0m0.974s 00:08:01.704 user 0m0.025s 00:08:01.704 sys 0m0.065s 00:08:01.704 11:45:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.704 11:45:55 -- common/autotest_common.sh@10 -- # set +x 00:08:01.704 ************************************ 00:08:01.704 END TEST filesystem_btrfs 00:08:01.704 ************************************ 00:08:01.704 11:45:55 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:01.704 11:45:55 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:01.704 11:45:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:01.704 11:45:55 -- common/autotest_common.sh@10 -- # set +x 00:08:01.704 ************************************ 00:08:01.704 START TEST filesystem_xfs 00:08:01.704 ************************************ 00:08:01.704 11:45:55 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:01.704 11:45:55 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:01.704 11:45:55 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:01.704 11:45:55 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:01.704 11:45:55 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:01.704 11:45:55 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:01.704 11:45:55 -- common/autotest_common.sh@904 -- # local i=0 00:08:01.704 11:45:55 -- common/autotest_common.sh@905 -- # local force 00:08:01.704 11:45:55 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:01.704 11:45:55 -- common/autotest_common.sh@910 -- # force=-f 00:08:01.704 11:45:55 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:01.704 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:01.704 = sectsz=512 attr=2, projid32bit=1 00:08:01.704 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:01.704 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:01.704 data = bsize=4096 blocks=130560, imaxpct=25 00:08:01.704 = sunit=0 swidth=0 blks 00:08:01.704 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:01.704 log =internal log bsize=4096 blocks=16384, version=2 00:08:01.704 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:01.704 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:02.649 Discarding blocks...Done. 00:08:02.649 11:45:56 -- common/autotest_common.sh@921 -- # return 0 00:08:02.649 11:45:56 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:05.196 11:45:58 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:05.196 11:45:58 -- target/filesystem.sh@25 -- # sync 00:08:05.196 11:45:58 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:05.196 11:45:58 -- target/filesystem.sh@27 -- # sync 00:08:05.196 11:45:58 -- target/filesystem.sh@29 -- # i=0 00:08:05.196 11:45:58 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:05.196 11:45:58 -- target/filesystem.sh@37 -- # kill -0 1764849 00:08:05.196 11:45:58 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:05.196 11:45:58 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:05.196 11:45:58 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:05.196 11:45:58 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:05.196 00:08:05.196 real 0m3.356s 00:08:05.196 user 0m0.019s 00:08:05.196 sys 0m0.060s 00:08:05.196 11:45:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.196 11:45:58 -- common/autotest_common.sh@10 -- # set +x 00:08:05.196 ************************************ 00:08:05.196 END TEST filesystem_xfs 00:08:05.196 ************************************ 00:08:05.196 11:45:58 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:05.457 11:45:58 -- target/filesystem.sh@93 -- # sync 00:08:05.457 11:45:59 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:05.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:05.457 11:45:59 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:05.457 11:45:59 -- common/autotest_common.sh@1198 -- # local i=0 00:08:05.457 11:45:59 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:05.457 11:45:59 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:05.457 11:45:59 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:05.457 11:45:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:05.457 11:45:59 -- common/autotest_common.sh@1210 -- # return 0 00:08:05.457 11:45:59 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:05.457 11:45:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.457 11:45:59 -- common/autotest_common.sh@10 -- # set +x 00:08:05.457 11:45:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.457 11:45:59 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:05.457 11:45:59 -- target/filesystem.sh@101 -- # killprocess 1764849 00:08:05.457 11:45:59 -- common/autotest_common.sh@926 -- # '[' -z 1764849 ']' 00:08:05.457 11:45:59 -- common/autotest_common.sh@930 -- # kill -0 1764849 00:08:05.457 11:45:59 -- 
common/autotest_common.sh@931 -- # uname 00:08:05.457 11:45:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:05.457 11:45:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1764849 00:08:05.457 11:45:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:05.457 11:45:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:05.457 11:45:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1764849' 00:08:05.457 killing process with pid 1764849 00:08:05.457 11:45:59 -- common/autotest_common.sh@945 -- # kill 1764849 00:08:05.457 11:45:59 -- common/autotest_common.sh@950 -- # wait 1764849 00:08:05.718 11:45:59 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:05.718 00:08:05.718 real 0m13.054s 00:08:05.718 user 0m51.376s 00:08:05.718 sys 0m1.033s 00:08:05.718 11:45:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.718 11:45:59 -- common/autotest_common.sh@10 -- # set +x 00:08:05.718 ************************************ 00:08:05.718 END TEST nvmf_filesystem_no_in_capsule 00:08:05.718 ************************************ 00:08:05.718 11:45:59 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:05.718 11:45:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:05.718 11:45:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:05.718 11:45:59 -- common/autotest_common.sh@10 -- # set +x 00:08:05.718 ************************************ 00:08:05.718 START TEST nvmf_filesystem_in_capsule 00:08:05.718 ************************************ 00:08:05.718 11:45:59 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:08:05.718 11:45:59 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:05.719 11:45:59 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:05.719 11:45:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:05.719 11:45:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:05.719 11:45:59 -- common/autotest_common.sh@10 -- # set +x 00:08:05.719 11:45:59 -- nvmf/common.sh@469 -- # nvmfpid=1767782 00:08:05.719 11:45:59 -- nvmf/common.sh@470 -- # waitforlisten 1767782 00:08:05.719 11:45:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:05.719 11:45:59 -- common/autotest_common.sh@819 -- # '[' -z 1767782 ']' 00:08:05.719 11:45:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.719 11:45:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:05.719 11:45:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.719 11:45:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:05.719 11:45:59 -- common/autotest_common.sh@10 -- # set +x 00:08:05.979 [2024-06-10 11:45:59.520359] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
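Teardown for the first pass, as traced just above (and repeated again at the end of the in-capsule pass): release the test partition, disconnect the host, drop the subsystem, then stop the target. Condensed from the trace:

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1     # drop the SPDK_TEST partition
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"                  # killprocess in the trace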
00:08:05.979 [2024-06-10 11:45:59.520419] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.979 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.979 [2024-06-10 11:45:59.586875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:05.979 [2024-06-10 11:45:59.653229] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:05.979 [2024-06-10 11:45:59.653363] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:05.979 [2024-06-10 11:45:59.653373] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:05.979 [2024-06-10 11:45:59.653380] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:05.979 [2024-06-10 11:45:59.653528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.979 [2024-06-10 11:45:59.653627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.979 [2024-06-10 11:45:59.653783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.979 [2024-06-10 11:45:59.653784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.550 11:46:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:06.550 11:46:00 -- common/autotest_common.sh@852 -- # return 0 00:08:06.550 11:46:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:06.550 11:46:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:06.550 11:46:00 -- common/autotest_common.sh@10 -- # set +x 00:08:06.811 11:46:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.811 11:46:00 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:06.811 11:46:00 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:06.811 11:46:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.811 11:46:00 -- common/autotest_common.sh@10 -- # set +x 00:08:06.811 [2024-06-10 11:46:00.341516] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.811 11:46:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.811 11:46:00 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:06.811 11:46:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.811 11:46:00 -- common/autotest_common.sh@10 -- # set +x 00:08:06.811 Malloc1 00:08:06.811 11:46:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.811 11:46:00 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:06.811 11:46:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.811 11:46:00 -- common/autotest_common.sh@10 -- # set +x 00:08:06.811 11:46:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.811 11:46:00 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:06.811 11:46:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.811 11:46:00 -- common/autotest_common.sh@10 -- # set +x 00:08:06.811 11:46:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.811 11:46:00 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
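The in-capsule pass repeats the same bring-up; the only functional difference is the transport's in-capsule data size, which is what the two TEST names refer to:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # nvmf_filesystem_no_in_capsule
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # nvmf_filesystem_in_capsule (this pass)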
00:08:06.811 11:46:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.811 11:46:00 -- common/autotest_common.sh@10 -- # set +x 00:08:06.811 [2024-06-10 11:46:00.468212] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:06.811 11:46:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.811 11:46:00 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:06.811 11:46:00 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:06.811 11:46:00 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:06.811 11:46:00 -- common/autotest_common.sh@1359 -- # local bs 00:08:06.811 11:46:00 -- common/autotest_common.sh@1360 -- # local nb 00:08:06.811 11:46:00 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:06.811 11:46:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.811 11:46:00 -- common/autotest_common.sh@10 -- # set +x 00:08:06.811 11:46:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.811 11:46:00 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:06.811 { 00:08:06.811 "name": "Malloc1", 00:08:06.811 "aliases": [ 00:08:06.811 "d04579dc-bfd0-44f1-9362-45783c28d4a1" 00:08:06.811 ], 00:08:06.811 "product_name": "Malloc disk", 00:08:06.811 "block_size": 512, 00:08:06.811 "num_blocks": 1048576, 00:08:06.811 "uuid": "d04579dc-bfd0-44f1-9362-45783c28d4a1", 00:08:06.811 "assigned_rate_limits": { 00:08:06.811 "rw_ios_per_sec": 0, 00:08:06.811 "rw_mbytes_per_sec": 0, 00:08:06.811 "r_mbytes_per_sec": 0, 00:08:06.811 "w_mbytes_per_sec": 0 00:08:06.811 }, 00:08:06.811 "claimed": true, 00:08:06.811 "claim_type": "exclusive_write", 00:08:06.811 "zoned": false, 00:08:06.811 "supported_io_types": { 00:08:06.811 "read": true, 00:08:06.811 "write": true, 00:08:06.811 "unmap": true, 00:08:06.811 "write_zeroes": true, 00:08:06.811 "flush": true, 00:08:06.811 "reset": true, 00:08:06.811 "compare": false, 00:08:06.811 "compare_and_write": false, 00:08:06.811 "abort": true, 00:08:06.811 "nvme_admin": false, 00:08:06.811 "nvme_io": false 00:08:06.811 }, 00:08:06.811 "memory_domains": [ 00:08:06.811 { 00:08:06.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.811 "dma_device_type": 2 00:08:06.811 } 00:08:06.811 ], 00:08:06.811 "driver_specific": {} 00:08:06.811 } 00:08:06.811 ]' 00:08:06.811 11:46:00 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:06.811 11:46:00 -- common/autotest_common.sh@1362 -- # bs=512 00:08:06.811 11:46:00 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:07.073 11:46:00 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:07.073 11:46:00 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:07.073 11:46:00 -- common/autotest_common.sh@1367 -- # echo 512 00:08:07.073 11:46:00 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:07.073 11:46:00 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:08.455 11:46:02 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:08.455 11:46:02 -- common/autotest_common.sh@1177 -- # local i=0 00:08:08.455 11:46:02 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:08.455 11:46:02 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:08.455 11:46:02 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:10.369 11:46:04 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:10.369 11:46:04 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:10.369 11:46:04 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:10.630 11:46:04 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:10.630 11:46:04 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:10.630 11:46:04 -- common/autotest_common.sh@1187 -- # return 0 00:08:10.630 11:46:04 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:10.630 11:46:04 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:10.630 11:46:04 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:10.630 11:46:04 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:10.630 11:46:04 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:10.630 11:46:04 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:10.630 11:46:04 -- setup/common.sh@80 -- # echo 536870912 00:08:10.630 11:46:04 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:10.630 11:46:04 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:10.630 11:46:04 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:10.630 11:46:04 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:10.891 11:46:04 -- target/filesystem.sh@69 -- # partprobe 00:08:10.891 11:46:04 -- target/filesystem.sh@70 -- # sleep 1 00:08:11.832 11:46:05 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:11.832 11:46:05 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:11.832 11:46:05 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:11.832 11:46:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:11.832 11:46:05 -- common/autotest_common.sh@10 -- # set +x 00:08:11.832 ************************************ 00:08:11.832 START TEST filesystem_in_capsule_ext4 00:08:11.832 ************************************ 00:08:11.832 11:46:05 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:11.832 11:46:05 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:11.832 11:46:05 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:11.832 11:46:05 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:11.832 11:46:05 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:11.832 11:46:05 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:11.832 11:46:05 -- common/autotest_common.sh@904 -- # local i=0 00:08:11.832 11:46:05 -- common/autotest_common.sh@905 -- # local force 00:08:11.832 11:46:05 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:11.832 11:46:05 -- common/autotest_common.sh@908 -- # force=-F 00:08:11.832 11:46:05 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:11.832 mke2fs 1.46.5 (30-Dec-2021) 00:08:12.093 Discarding device blocks: 0/522240 done 00:08:12.093 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:12.093 Filesystem UUID: 3aa7620a-6c59-40df-becc-871995cb40bd 00:08:12.093 Superblock backups stored on blocks: 00:08:12.093 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:12.093 00:08:12.093 Allocating group tables: 0/64 done 00:08:12.093 Writing inode tables: 0/64 done 00:08:12.093 Creating journal (8192 blocks): done 00:08:13.298 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:08:13.298 00:08:13.298 
11:46:06 -- common/autotest_common.sh@921 -- # return 0 00:08:13.298 11:46:06 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:13.870 11:46:07 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:13.870 11:46:07 -- target/filesystem.sh@25 -- # sync 00:08:13.870 11:46:07 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:13.870 11:46:07 -- target/filesystem.sh@27 -- # sync 00:08:13.870 11:46:07 -- target/filesystem.sh@29 -- # i=0 00:08:13.870 11:46:07 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:13.870 11:46:07 -- target/filesystem.sh@37 -- # kill -0 1767782 00:08:13.870 11:46:07 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:13.870 11:46:07 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:13.870 11:46:07 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:13.870 11:46:07 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:13.870 00:08:13.870 real 0m1.864s 00:08:13.870 user 0m0.026s 00:08:13.870 sys 0m0.050s 00:08:13.870 11:46:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.870 11:46:07 -- common/autotest_common.sh@10 -- # set +x 00:08:13.870 ************************************ 00:08:13.870 END TEST filesystem_in_capsule_ext4 00:08:13.870 ************************************ 00:08:13.870 11:46:07 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:13.870 11:46:07 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:13.870 11:46:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:13.870 11:46:07 -- common/autotest_common.sh@10 -- # set +x 00:08:13.870 ************************************ 00:08:13.870 START TEST filesystem_in_capsule_btrfs 00:08:13.870 ************************************ 00:08:13.870 11:46:07 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:13.870 11:46:07 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:13.870 11:46:07 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:13.870 11:46:07 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:13.870 11:46:07 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:13.870 11:46:07 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:13.870 11:46:07 -- common/autotest_common.sh@904 -- # local i=0 00:08:13.870 11:46:07 -- common/autotest_common.sh@905 -- # local force 00:08:13.870 11:46:07 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:13.870 11:46:07 -- common/autotest_common.sh@910 -- # force=-f 00:08:13.870 11:46:07 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:14.131 btrfs-progs v6.6.2 00:08:14.131 See https://btrfs.readthedocs.io for more information. 00:08:14.131 00:08:14.131 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:14.131 NOTE: several default settings have changed in version 5.15, please make sure 00:08:14.131 this does not affect your deployments: 00:08:14.131 - DUP for metadata (-m dup) 00:08:14.131 - enabled no-holes (-O no-holes) 00:08:14.131 - enabled free-space-tree (-R free-space-tree) 00:08:14.131 00:08:14.131 Label: (null) 00:08:14.131 UUID: a93f6a96-e9a8-40d4-a227-110920e460d7 00:08:14.131 Node size: 16384 00:08:14.131 Sector size: 4096 00:08:14.131 Filesystem size: 510.00MiB 00:08:14.131 Block group profiles: 00:08:14.131 Data: single 8.00MiB 00:08:14.131 Metadata: DUP 32.00MiB 00:08:14.131 System: DUP 8.00MiB 00:08:14.131 SSD detected: yes 00:08:14.131 Zoned device: no 00:08:14.131 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:14.131 Runtime features: free-space-tree 00:08:14.131 Checksum: crc32c 00:08:14.131 Number of devices: 1 00:08:14.131 Devices: 00:08:14.131 ID SIZE PATH 00:08:14.131 1 510.00MiB /dev/nvme0n1p1 00:08:14.131 00:08:14.131 11:46:07 -- common/autotest_common.sh@921 -- # return 0 00:08:14.131 11:46:07 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:15.552 11:46:08 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:15.552 11:46:08 -- target/filesystem.sh@25 -- # sync 00:08:15.552 11:46:08 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:15.552 11:46:08 -- target/filesystem.sh@27 -- # sync 00:08:15.552 11:46:08 -- target/filesystem.sh@29 -- # i=0 00:08:15.552 11:46:08 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:15.552 11:46:08 -- target/filesystem.sh@37 -- # kill -0 1767782 00:08:15.552 11:46:08 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:15.552 11:46:08 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:15.552 11:46:08 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:15.552 11:46:08 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:15.552 00:08:15.552 real 0m1.438s 00:08:15.552 user 0m0.024s 00:08:15.552 sys 0m0.069s 00:08:15.552 11:46:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.552 11:46:08 -- common/autotest_common.sh@10 -- # set +x 00:08:15.552 ************************************ 00:08:15.552 END TEST filesystem_in_capsule_btrfs 00:08:15.552 ************************************ 00:08:15.552 11:46:08 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:15.552 11:46:08 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:15.552 11:46:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:15.552 11:46:08 -- common/autotest_common.sh@10 -- # set +x 00:08:15.552 ************************************ 00:08:15.552 START TEST filesystem_in_capsule_xfs 00:08:15.552 ************************************ 00:08:15.552 11:46:08 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:15.552 11:46:08 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:15.552 11:46:08 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:15.552 11:46:08 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:15.552 11:46:08 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:15.552 11:46:08 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:15.552 11:46:08 -- common/autotest_common.sh@904 -- # local i=0 00:08:15.552 11:46:08 -- common/autotest_common.sh@905 -- # local force 00:08:15.552 11:46:08 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:15.552 11:46:08 -- common/autotest_common.sh@910 -- # force=-f 
00:08:15.552 11:46:08 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:15.552 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:15.552 = sectsz=512 attr=2, projid32bit=1 00:08:15.552 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:15.552 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:15.552 data = bsize=4096 blocks=130560, imaxpct=25 00:08:15.552 = sunit=0 swidth=0 blks 00:08:15.552 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:15.552 log =internal log bsize=4096 blocks=16384, version=2 00:08:15.552 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:15.552 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:16.553 Discarding blocks...Done. 00:08:16.553 11:46:10 -- common/autotest_common.sh@921 -- # return 0 00:08:16.554 11:46:10 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:19.098 11:46:12 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:19.098 11:46:12 -- target/filesystem.sh@25 -- # sync 00:08:19.098 11:46:12 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:19.098 11:46:12 -- target/filesystem.sh@27 -- # sync 00:08:19.098 11:46:12 -- target/filesystem.sh@29 -- # i=0 00:08:19.098 11:46:12 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:19.098 11:46:12 -- target/filesystem.sh@37 -- # kill -0 1767782 00:08:19.098 11:46:12 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:19.098 11:46:12 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:19.098 11:46:12 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:19.098 11:46:12 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:19.098 00:08:19.098 real 0m3.434s 00:08:19.098 user 0m0.023s 00:08:19.098 sys 0m0.057s 00:08:19.098 11:46:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.098 11:46:12 -- common/autotest_common.sh@10 -- # set +x 00:08:19.098 ************************************ 00:08:19.098 END TEST filesystem_in_capsule_xfs 00:08:19.098 ************************************ 00:08:19.098 11:46:12 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:19.098 11:46:12 -- target/filesystem.sh@93 -- # sync 00:08:19.098 11:46:12 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:19.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:19.098 11:46:12 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:19.098 11:46:12 -- common/autotest_common.sh@1198 -- # local i=0 00:08:19.098 11:46:12 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:19.098 11:46:12 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:19.098 11:46:12 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:19.098 11:46:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:19.098 11:46:12 -- common/autotest_common.sh@1210 -- # return 0 00:08:19.098 11:46:12 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:19.098 11:46:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:19.098 11:46:12 -- common/autotest_common.sh@10 -- # set +x 00:08:19.098 11:46:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:19.098 11:46:12 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:19.098 11:46:12 -- target/filesystem.sh@101 -- # killprocess 1767782 00:08:19.098 11:46:12 -- common/autotest_common.sh@926 -- # '[' -z 1767782 ']' 00:08:19.098 11:46:12 -- common/autotest_common.sh@930 -- # kill -0 1767782 
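The ext4, btrfs, and xfs cases above all run the same smoke test against the NVMe-oF-attached partition before the target is torn down. A minimal standalone sketch of that loop (not the exact autotest helper), assuming the namespace is already connected as /dev/nvme0n1 and partitioned as /dev/nvme0n1p1 as in the trace; the -F/-f split mirrors the make_filesystem branch visible above:
# Hedged sketch of the per-filesystem smoke test; device and mount paths mirror the trace.
for fstype in ext4 btrfs xfs; do
    force=-f
    [ "$fstype" = ext4 ] && force=-F
    mkfs.$fstype $force /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition must remain visible after umount
done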
00:08:19.098 11:46:12 -- common/autotest_common.sh@931 -- # uname 00:08:19.098 11:46:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:19.098 11:46:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1767782 00:08:19.098 11:46:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:19.098 11:46:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:19.098 11:46:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1767782' 00:08:19.098 killing process with pid 1767782 00:08:19.098 11:46:12 -- common/autotest_common.sh@945 -- # kill 1767782 00:08:19.098 11:46:12 -- common/autotest_common.sh@950 -- # wait 1767782 00:08:19.359 11:46:12 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:19.359 00:08:19.359 real 0m13.428s 00:08:19.359 user 0m52.935s 00:08:19.359 sys 0m1.001s 00:08:19.359 11:46:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.359 11:46:12 -- common/autotest_common.sh@10 -- # set +x 00:08:19.359 ************************************ 00:08:19.359 END TEST nvmf_filesystem_in_capsule 00:08:19.359 ************************************ 00:08:19.359 11:46:12 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:19.359 11:46:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:19.359 11:46:12 -- nvmf/common.sh@116 -- # sync 00:08:19.359 11:46:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:19.359 11:46:12 -- nvmf/common.sh@119 -- # set +e 00:08:19.359 11:46:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:19.359 11:46:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:19.359 rmmod nvme_tcp 00:08:19.359 rmmod nvme_fabrics 00:08:19.359 rmmod nvme_keyring 00:08:19.359 11:46:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:19.359 11:46:12 -- nvmf/common.sh@123 -- # set -e 00:08:19.359 11:46:12 -- nvmf/common.sh@124 -- # return 0 00:08:19.359 11:46:12 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:19.359 11:46:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:19.359 11:46:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:19.359 11:46:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:19.360 11:46:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:19.360 11:46:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:19.360 11:46:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.360 11:46:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:19.360 11:46:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.915 11:46:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:21.915 00:08:21.915 real 0m36.092s 00:08:21.915 user 1m46.505s 00:08:21.915 sys 0m7.383s 00:08:21.915 11:46:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.915 11:46:15 -- common/autotest_common.sh@10 -- # set +x 00:08:21.915 ************************************ 00:08:21.915 END TEST nvmf_filesystem 00:08:21.915 ************************************ 00:08:21.915 11:46:15 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:21.915 11:46:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:21.915 11:46:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:21.915 11:46:15 -- common/autotest_common.sh@10 -- # set +x 00:08:21.915 ************************************ 00:08:21.915 START TEST nvmf_discovery 00:08:21.915 ************************************ 00:08:21.915 
11:46:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:21.915 * Looking for test storage... 00:08:21.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:21.915 11:46:15 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:21.915 11:46:15 -- nvmf/common.sh@7 -- # uname -s 00:08:21.915 11:46:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.915 11:46:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.915 11:46:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.915 11:46:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.915 11:46:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.915 11:46:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.915 11:46:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.915 11:46:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.915 11:46:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.915 11:46:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.915 11:46:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:21.915 11:46:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:21.915 11:46:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.915 11:46:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.915 11:46:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:21.915 11:46:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:21.915 11:46:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.915 11:46:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.915 11:46:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.915 11:46:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.915 11:46:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.915 11:46:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.915 11:46:15 -- paths/export.sh@5 -- # export PATH 00:08:21.915 11:46:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.915 11:46:15 -- nvmf/common.sh@46 -- # : 0 00:08:21.915 11:46:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:21.915 11:46:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:21.915 11:46:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:21.915 11:46:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.915 11:46:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.915 11:46:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:21.915 11:46:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:21.915 11:46:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:21.915 11:46:15 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:21.915 11:46:15 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:21.915 11:46:15 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:21.915 11:46:15 -- target/discovery.sh@15 -- # hash nvme 00:08:21.915 11:46:15 -- target/discovery.sh@20 -- # nvmftestinit 00:08:21.915 11:46:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:21.915 11:46:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.915 11:46:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:21.915 11:46:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:21.915 11:46:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:21.915 11:46:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.915 11:46:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:21.915 11:46:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.915 11:46:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:21.915 11:46:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:21.915 11:46:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:21.915 11:46:15 -- common/autotest_common.sh@10 -- # set +x 00:08:28.506 11:46:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:28.506 11:46:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:28.506 11:46:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:28.506 11:46:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:28.506 11:46:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:28.506 11:46:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:28.506 11:46:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:28.506 11:46:21 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:28.506 11:46:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:28.506 11:46:21 -- nvmf/common.sh@295 -- # e810=() 00:08:28.506 11:46:21 -- nvmf/common.sh@295 -- # local -ga e810 00:08:28.506 11:46:21 -- nvmf/common.sh@296 -- # x722=() 00:08:28.506 11:46:21 -- nvmf/common.sh@296 -- # local -ga x722 00:08:28.506 11:46:21 -- nvmf/common.sh@297 -- # mlx=() 00:08:28.506 11:46:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:28.506 11:46:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:28.506 11:46:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:28.506 11:46:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:28.506 11:46:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:28.506 11:46:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:28.506 11:46:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:28.506 11:46:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:28.506 11:46:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:28.506 11:46:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:28.506 11:46:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:28.506 11:46:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:28.506 11:46:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:28.506 11:46:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:28.506 11:46:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:28.506 11:46:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:28.506 11:46:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:28.506 11:46:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:28.506 11:46:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:28.506 11:46:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:28.506 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:28.506 11:46:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:28.506 11:46:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:28.506 11:46:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.506 11:46:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.506 11:46:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:28.506 11:46:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:28.506 11:46:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:28.506 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:28.506 11:46:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:28.507 11:46:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:28.507 11:46:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.507 11:46:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.507 11:46:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:28.507 11:46:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:28.507 11:46:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:28.507 11:46:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:28.507 11:46:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:28.507 11:46:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.507 11:46:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:28.507 11:46:21 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.507 11:46:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:28.507 Found net devices under 0000:31:00.0: cvl_0_0 00:08:28.507 11:46:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.507 11:46:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:28.507 11:46:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.507 11:46:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:28.507 11:46:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.507 11:46:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:28.507 Found net devices under 0000:31:00.1: cvl_0_1 00:08:28.507 11:46:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.507 11:46:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:28.507 11:46:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:28.507 11:46:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:28.507 11:46:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:28.507 11:46:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:28.507 11:46:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:28.507 11:46:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:28.507 11:46:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:28.507 11:46:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:28.507 11:46:21 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:28.507 11:46:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:28.507 11:46:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:28.507 11:46:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:28.507 11:46:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:28.507 11:46:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:28.507 11:46:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:28.507 11:46:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:28.507 11:46:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:28.507 11:46:22 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:28.507 11:46:22 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:28.507 11:46:22 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:28.507 11:46:22 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:28.507 11:46:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:28.507 11:46:22 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:28.507 11:46:22 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:28.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:28.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.508 ms 00:08:28.507 00:08:28.507 --- 10.0.0.2 ping statistics --- 00:08:28.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.507 rtt min/avg/max/mdev = 0.508/0.508/0.508/0.000 ms 00:08:28.507 11:46:22 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:28.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:28.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.395 ms 00:08:28.507 00:08:28.507 --- 10.0.0.1 ping statistics --- 00:08:28.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.507 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:08:28.507 11:46:22 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:28.507 11:46:22 -- nvmf/common.sh@410 -- # return 0 00:08:28.507 11:46:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:28.507 11:46:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:28.507 11:46:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:28.507 11:46:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:28.507 11:46:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:28.507 11:46:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:28.507 11:46:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:28.507 11:46:22 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:28.507 11:46:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:28.507 11:46:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:28.507 11:46:22 -- common/autotest_common.sh@10 -- # set +x 00:08:28.769 11:46:22 -- nvmf/common.sh@469 -- # nvmfpid=1774799 00:08:28.769 11:46:22 -- nvmf/common.sh@470 -- # waitforlisten 1774799 00:08:28.769 11:46:22 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:28.769 11:46:22 -- common/autotest_common.sh@819 -- # '[' -z 1774799 ']' 00:08:28.769 11:46:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.769 11:46:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:28.769 11:46:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.769 11:46:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:28.769 11:46:22 -- common/autotest_common.sh@10 -- # set +x 00:08:28.769 [2024-06-10 11:46:22.330420] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:28.769 [2024-06-10 11:46:22.330483] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.769 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.769 [2024-06-10 11:46:22.402319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:28.769 [2024-06-10 11:46:22.474980] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:28.769 [2024-06-10 11:46:22.475114] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.769 [2024-06-10 11:46:22.475124] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.769 [2024-06-10 11:46:22.475132] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
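nvmftestinit turned the two E810 ports into a point-to-point TCP link by moving the target-side port into its own network namespace; the two pings above are the sanity check for that link. A condensed sketch of the setup, assuming the same cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addresses shown in the trace (they will differ on other hardware):
# Hedged recap of the namespace plumbing from nvmf_tcp_init above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                      # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator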
00:08:28.769 [2024-06-10 11:46:22.475287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.769 [2024-06-10 11:46:22.475376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.769 [2024-06-10 11:46:22.475533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:28.769 [2024-06-10 11:46:22.475534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.341 11:46:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:29.341 11:46:23 -- common/autotest_common.sh@852 -- # return 0 00:08:29.341 11:46:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:29.341 11:46:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:29.341 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 11:46:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.603 11:46:23 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:29.603 11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.603 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 [2024-06-10 11:46:23.150428] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.603 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.603 11:46:23 -- target/discovery.sh@26 -- # seq 1 4 00:08:29.603 11:46:23 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:29.603 11:46:23 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:29.603 11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.603 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 Null1 00:08:29.603 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.603 11:46:23 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:29.603 11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.603 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.603 11:46:23 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:29.603 11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.603 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.603 11:46:23 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:29.603 11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.603 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 [2024-06-10 11:46:23.206716] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:29.603 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.603 11:46:23 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:29.603 11:46:23 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:29.603 11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.603 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 Null2 00:08:29.603 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.603 11:46:23 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:29.603 11:46:23 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.603 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.603 11:46:23 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:29.603 11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.603 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.603 11:46:23 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:29.603 11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.603 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.603 11:46:23 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:29.603 11:46:23 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:29.603 11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.603 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 Null3 00:08:29.603 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.603 11:46:23 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:29.603 11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.603 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.603 11:46:23 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:29.603 11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.603 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.603 11:46:23 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:29.603 11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.603 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.603 11:46:23 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:29.603 11:46:23 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:29.603 11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.603 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 Null4 00:08:29.603 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.603 11:46:23 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:29.603 11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.603 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.603 11:46:23 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:29.603 11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.603 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.603 11:46:23 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:29.603 
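discovery.sh drives the target entirely through rpc_cmd, which ultimately goes through SPDK's scripts/rpc.py talking to the nvmf_tgt started in the namespace. A hedged sketch of the same configuration issued by hand (including the discovery listener and 4430 referral added right after this point), assuming an SPDK checkout and the 10.0.0.2 listener address used above:
# Sketch only: one TCP transport, four null bdevs, four subsystems,
# each with one namespace and one TCP listener, mirroring the trace.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
for i in 1 2 3 4; do
    ./scripts/rpc.py bdev_null_create Null$i 102400 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430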
11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.603 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.603 11:46:23 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:29.603 11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.603 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.603 11:46:23 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:29.603 11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.603 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.603 11:46:23 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:08:29.865 00:08:29.865 Discovery Log Number of Records 6, Generation counter 6 00:08:29.865 =====Discovery Log Entry 0====== 00:08:29.865 trtype: tcp 00:08:29.865 adrfam: ipv4 00:08:29.865 subtype: current discovery subsystem 00:08:29.865 treq: not required 00:08:29.865 portid: 0 00:08:29.865 trsvcid: 4420 00:08:29.865 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:29.865 traddr: 10.0.0.2 00:08:29.865 eflags: explicit discovery connections, duplicate discovery information 00:08:29.865 sectype: none 00:08:29.865 =====Discovery Log Entry 1====== 00:08:29.865 trtype: tcp 00:08:29.865 adrfam: ipv4 00:08:29.865 subtype: nvme subsystem 00:08:29.865 treq: not required 00:08:29.865 portid: 0 00:08:29.865 trsvcid: 4420 00:08:29.865 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:29.865 traddr: 10.0.0.2 00:08:29.865 eflags: none 00:08:29.865 sectype: none 00:08:29.865 =====Discovery Log Entry 2====== 00:08:29.865 trtype: tcp 00:08:29.865 adrfam: ipv4 00:08:29.865 subtype: nvme subsystem 00:08:29.865 treq: not required 00:08:29.865 portid: 0 00:08:29.865 trsvcid: 4420 00:08:29.865 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:29.865 traddr: 10.0.0.2 00:08:29.865 eflags: none 00:08:29.865 sectype: none 00:08:29.865 =====Discovery Log Entry 3====== 00:08:29.865 trtype: tcp 00:08:29.865 adrfam: ipv4 00:08:29.865 subtype: nvme subsystem 00:08:29.865 treq: not required 00:08:29.865 portid: 0 00:08:29.865 trsvcid: 4420 00:08:29.865 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:29.865 traddr: 10.0.0.2 00:08:29.865 eflags: none 00:08:29.865 sectype: none 00:08:29.865 =====Discovery Log Entry 4====== 00:08:29.865 trtype: tcp 00:08:29.865 adrfam: ipv4 00:08:29.865 subtype: nvme subsystem 00:08:29.865 treq: not required 00:08:29.865 portid: 0 00:08:29.865 trsvcid: 4420 00:08:29.865 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:29.865 traddr: 10.0.0.2 00:08:29.865 eflags: none 00:08:29.865 sectype: none 00:08:29.865 =====Discovery Log Entry 5====== 00:08:29.865 trtype: tcp 00:08:29.865 adrfam: ipv4 00:08:29.865 subtype: discovery subsystem referral 00:08:29.865 treq: not required 00:08:29.865 portid: 0 00:08:29.865 trsvcid: 4430 00:08:29.865 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:29.865 traddr: 10.0.0.2 00:08:29.865 eflags: none 00:08:29.865 sectype: none 00:08:29.865 11:46:23 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:29.865 Perform nvmf subsystem discovery via RPC 00:08:29.865 11:46:23 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:29.865 11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.865 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.865 [2024-06-10 11:46:23.571831] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:29.865 [ 00:08:29.865 { 00:08:29.865 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:29.865 "subtype": "Discovery", 00:08:29.865 "listen_addresses": [ 00:08:29.865 { 00:08:29.865 "transport": "TCP", 00:08:29.865 "trtype": "TCP", 00:08:29.865 "adrfam": "IPv4", 00:08:29.865 "traddr": "10.0.0.2", 00:08:29.865 "trsvcid": "4420" 00:08:29.865 } 00:08:29.865 ], 00:08:29.865 "allow_any_host": true, 00:08:29.865 "hosts": [] 00:08:29.865 }, 00:08:29.865 { 00:08:29.865 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:29.865 "subtype": "NVMe", 00:08:29.865 "listen_addresses": [ 00:08:29.865 { 00:08:29.865 "transport": "TCP", 00:08:29.865 "trtype": "TCP", 00:08:29.865 "adrfam": "IPv4", 00:08:29.865 "traddr": "10.0.0.2", 00:08:29.865 "trsvcid": "4420" 00:08:29.865 } 00:08:29.865 ], 00:08:29.865 "allow_any_host": true, 00:08:29.865 "hosts": [], 00:08:29.865 "serial_number": "SPDK00000000000001", 00:08:29.865 "model_number": "SPDK bdev Controller", 00:08:29.865 "max_namespaces": 32, 00:08:29.865 "min_cntlid": 1, 00:08:29.865 "max_cntlid": 65519, 00:08:29.865 "namespaces": [ 00:08:29.865 { 00:08:29.865 "nsid": 1, 00:08:29.865 "bdev_name": "Null1", 00:08:29.865 "name": "Null1", 00:08:29.865 "nguid": "27C328B05FAB4F3C9DEC5B5C9E3C71C1", 00:08:29.865 "uuid": "27c328b0-5fab-4f3c-9dec-5b5c9e3c71c1" 00:08:29.865 } 00:08:29.865 ] 00:08:29.865 }, 00:08:29.865 { 00:08:29.865 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:29.865 "subtype": "NVMe", 00:08:29.865 "listen_addresses": [ 00:08:29.865 { 00:08:29.865 "transport": "TCP", 00:08:29.865 "trtype": "TCP", 00:08:29.865 "adrfam": "IPv4", 00:08:29.865 "traddr": "10.0.0.2", 00:08:29.865 "trsvcid": "4420" 00:08:29.865 } 00:08:29.865 ], 00:08:29.865 "allow_any_host": true, 00:08:29.865 "hosts": [], 00:08:29.865 "serial_number": "SPDK00000000000002", 00:08:29.865 "model_number": "SPDK bdev Controller", 00:08:29.865 "max_namespaces": 32, 00:08:29.865 "min_cntlid": 1, 00:08:29.865 "max_cntlid": 65519, 00:08:29.865 "namespaces": [ 00:08:29.865 { 00:08:29.865 "nsid": 1, 00:08:29.865 "bdev_name": "Null2", 00:08:29.865 "name": "Null2", 00:08:29.865 "nguid": "5383FDA9868841F3AD8484BECA100538", 00:08:29.865 "uuid": "5383fda9-8688-41f3-ad84-84beca100538" 00:08:29.865 } 00:08:29.865 ] 00:08:29.865 }, 00:08:29.865 { 00:08:29.865 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:29.865 "subtype": "NVMe", 00:08:29.865 "listen_addresses": [ 00:08:29.865 { 00:08:29.865 "transport": "TCP", 00:08:29.865 "trtype": "TCP", 00:08:29.865 "adrfam": "IPv4", 00:08:29.865 "traddr": "10.0.0.2", 00:08:29.865 "trsvcid": "4420" 00:08:29.865 } 00:08:29.865 ], 00:08:29.865 "allow_any_host": true, 00:08:29.865 "hosts": [], 00:08:29.865 "serial_number": "SPDK00000000000003", 00:08:29.865 "model_number": "SPDK bdev Controller", 00:08:29.865 "max_namespaces": 32, 00:08:29.865 "min_cntlid": 1, 00:08:29.865 "max_cntlid": 65519, 00:08:29.865 "namespaces": [ 00:08:29.865 { 00:08:29.865 "nsid": 1, 00:08:29.865 "bdev_name": "Null3", 00:08:29.865 "name": "Null3", 00:08:29.865 "nguid": "6FBE25E3AF884383A64BD75C0A1C7424", 00:08:29.865 "uuid": "6fbe25e3-af88-4383-a64b-d75c0a1c7424" 00:08:29.865 } 00:08:29.865 ] 
00:08:29.865 }, 00:08:29.865 { 00:08:29.865 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:29.865 "subtype": "NVMe", 00:08:29.865 "listen_addresses": [ 00:08:29.865 { 00:08:29.866 "transport": "TCP", 00:08:29.866 "trtype": "TCP", 00:08:29.866 "adrfam": "IPv4", 00:08:29.866 "traddr": "10.0.0.2", 00:08:29.866 "trsvcid": "4420" 00:08:29.866 } 00:08:29.866 ], 00:08:29.866 "allow_any_host": true, 00:08:29.866 "hosts": [], 00:08:29.866 "serial_number": "SPDK00000000000004", 00:08:29.866 "model_number": "SPDK bdev Controller", 00:08:29.866 "max_namespaces": 32, 00:08:29.866 "min_cntlid": 1, 00:08:29.866 "max_cntlid": 65519, 00:08:29.866 "namespaces": [ 00:08:29.866 { 00:08:29.866 "nsid": 1, 00:08:29.866 "bdev_name": "Null4", 00:08:29.866 "name": "Null4", 00:08:29.866 "nguid": "3D9EE2A785C54FE188FB73E0865C6025", 00:08:29.866 "uuid": "3d9ee2a7-85c5-4fe1-88fb-73e0865c6025" 00:08:29.866 } 00:08:29.866 ] 00:08:29.866 } 00:08:29.866 ] 00:08:29.866 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.866 11:46:23 -- target/discovery.sh@42 -- # seq 1 4 00:08:29.866 11:46:23 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:29.866 11:46:23 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:29.866 11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.866 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.866 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.866 11:46:23 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:29.866 11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.866 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.866 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.866 11:46:23 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:29.866 11:46:23 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:29.866 11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.866 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.866 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.866 11:46:23 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:29.866 11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.866 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.866 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.866 11:46:23 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:29.866 11:46:23 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:29.866 11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.866 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:30.127 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:30.127 11:46:23 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:30.127 11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:30.127 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:30.127 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:30.127 11:46:23 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:30.127 11:46:23 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:30.127 11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:30.127 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:30.127 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
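The nvmf_get_subsystems dump a few lines above is plain JSON, so the same details the discovery log reported can be pulled out with jq (the script itself does this for bdev_get_bdevs further down). A small sketch, assuming scripts/rpc.py and the field names exactly as printed above:
# List each NVMe subsystem with its first listener and its namespace bdevs.
./scripts/rpc.py nvmf_get_subsystems | jq -r '
  .[]
  | select(.subtype == "NVMe")
  | "\(.nqn) \(.listen_addresses[0].traddr):\(.listen_addresses[0].trsvcid) \([.namespaces[].bdev_name] | join(","))"'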
00:08:30.127 11:46:23 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:30.127 11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:30.127 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:30.127 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:30.127 11:46:23 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:30.127 11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:30.127 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:30.127 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:30.127 11:46:23 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:30.127 11:46:23 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:30.127 11:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:30.127 11:46:23 -- common/autotest_common.sh@10 -- # set +x 00:08:30.127 11:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:30.127 11:46:23 -- target/discovery.sh@49 -- # check_bdevs= 00:08:30.127 11:46:23 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:30.127 11:46:23 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:30.128 11:46:23 -- target/discovery.sh@57 -- # nvmftestfini 00:08:30.128 11:46:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:30.128 11:46:23 -- nvmf/common.sh@116 -- # sync 00:08:30.128 11:46:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:30.128 11:46:23 -- nvmf/common.sh@119 -- # set +e 00:08:30.128 11:46:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:30.128 11:46:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:30.128 rmmod nvme_tcp 00:08:30.128 rmmod nvme_fabrics 00:08:30.128 rmmod nvme_keyring 00:08:30.128 11:46:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:30.128 11:46:23 -- nvmf/common.sh@123 -- # set -e 00:08:30.128 11:46:23 -- nvmf/common.sh@124 -- # return 0 00:08:30.128 11:46:23 -- nvmf/common.sh@477 -- # '[' -n 1774799 ']' 00:08:30.128 11:46:23 -- nvmf/common.sh@478 -- # killprocess 1774799 00:08:30.128 11:46:23 -- common/autotest_common.sh@926 -- # '[' -z 1774799 ']' 00:08:30.128 11:46:23 -- common/autotest_common.sh@930 -- # kill -0 1774799 00:08:30.128 11:46:23 -- common/autotest_common.sh@931 -- # uname 00:08:30.128 11:46:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:30.128 11:46:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1774799 00:08:30.128 11:46:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:30.128 11:46:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:30.128 11:46:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1774799' 00:08:30.128 killing process with pid 1774799 00:08:30.128 11:46:23 -- common/autotest_common.sh@945 -- # kill 1774799 00:08:30.128 [2024-06-10 11:46:23.856169] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:30.128 11:46:23 -- common/autotest_common.sh@950 -- # wait 1774799 00:08:30.389 11:46:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:30.389 11:46:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:30.389 11:46:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:30.389 11:46:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:30.389 11:46:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:30.389 11:46:23 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.389 11:46:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:30.389 11:46:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.304 11:46:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:32.304 00:08:32.304 real 0m10.954s 00:08:32.304 user 0m8.183s 00:08:32.304 sys 0m5.533s 00:08:32.304 11:46:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.304 11:46:26 -- common/autotest_common.sh@10 -- # set +x 00:08:32.304 ************************************ 00:08:32.304 END TEST nvmf_discovery 00:08:32.304 ************************************ 00:08:32.566 11:46:26 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:32.566 11:46:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:32.566 11:46:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:32.566 11:46:26 -- common/autotest_common.sh@10 -- # set +x 00:08:32.566 ************************************ 00:08:32.566 START TEST nvmf_referrals 00:08:32.566 ************************************ 00:08:32.566 11:46:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:32.566 * Looking for test storage... 00:08:32.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:32.566 11:46:26 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:32.566 11:46:26 -- nvmf/common.sh@7 -- # uname -s 00:08:32.566 11:46:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:32.566 11:46:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.566 11:46:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.566 11:46:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.566 11:46:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.566 11:46:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.566 11:46:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.566 11:46:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.566 11:46:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.566 11:46:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.566 11:46:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:32.566 11:46:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:32.566 11:46:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.566 11:46:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.566 11:46:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:32.566 11:46:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:32.566 11:46:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.566 11:46:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.566 11:46:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.566 11:46:26 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.566 11:46:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.567 11:46:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.567 11:46:26 -- paths/export.sh@5 -- # export PATH 00:08:32.567 11:46:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.567 11:46:26 -- nvmf/common.sh@46 -- # : 0 00:08:32.567 11:46:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:32.567 11:46:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:32.567 11:46:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:32.567 11:46:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.567 11:46:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.567 11:46:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:32.567 11:46:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:32.567 11:46:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:32.567 11:46:26 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:32.567 11:46:26 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:32.567 11:46:26 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:32.567 11:46:26 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:32.567 11:46:26 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:32.567 11:46:26 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:32.567 11:46:26 -- target/referrals.sh@37 -- # nvmftestinit 00:08:32.567 11:46:26 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:08:32.567 11:46:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:32.567 11:46:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:32.567 11:46:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:32.567 11:46:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:32.567 11:46:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.567 11:46:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:32.567 11:46:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.567 11:46:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:32.567 11:46:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:32.567 11:46:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:32.567 11:46:26 -- common/autotest_common.sh@10 -- # set +x 00:08:40.714 11:46:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:40.714 11:46:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:40.714 11:46:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:40.714 11:46:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:40.714 11:46:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:40.714 11:46:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:40.714 11:46:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:40.714 11:46:33 -- nvmf/common.sh@294 -- # net_devs=() 00:08:40.714 11:46:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:40.714 11:46:33 -- nvmf/common.sh@295 -- # e810=() 00:08:40.714 11:46:33 -- nvmf/common.sh@295 -- # local -ga e810 00:08:40.714 11:46:33 -- nvmf/common.sh@296 -- # x722=() 00:08:40.714 11:46:33 -- nvmf/common.sh@296 -- # local -ga x722 00:08:40.714 11:46:33 -- nvmf/common.sh@297 -- # mlx=() 00:08:40.714 11:46:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:40.714 11:46:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:40.714 11:46:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:40.714 11:46:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:40.714 11:46:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:40.714 11:46:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:40.714 11:46:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:40.714 11:46:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:40.714 11:46:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:40.714 11:46:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:40.714 11:46:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:40.714 11:46:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:40.714 11:46:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:40.714 11:46:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:40.714 11:46:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:40.714 11:46:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:40.714 11:46:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:40.714 11:46:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:40.714 11:46:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:40.714 11:46:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:40.714 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:40.714 11:46:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:40.714 11:46:33 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:40.714 11:46:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.714 11:46:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.714 11:46:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:40.714 11:46:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:40.714 11:46:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:40.714 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:40.714 11:46:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:40.714 11:46:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:40.714 11:46:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.714 11:46:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.714 11:46:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:40.714 11:46:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:40.714 11:46:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:40.714 11:46:33 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:40.714 11:46:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:40.714 11:46:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.714 11:46:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:40.714 11:46:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.714 11:46:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:40.714 Found net devices under 0000:31:00.0: cvl_0_0 00:08:40.714 11:46:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.714 11:46:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:40.714 11:46:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.714 11:46:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:40.714 11:46:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.714 11:46:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:40.714 Found net devices under 0000:31:00.1: cvl_0_1 00:08:40.714 11:46:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.714 11:46:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:40.714 11:46:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:40.714 11:46:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:40.714 11:46:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:40.714 11:46:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:40.714 11:46:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:40.714 11:46:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:40.714 11:46:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:40.714 11:46:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:40.714 11:46:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:40.714 11:46:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:40.714 11:46:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:40.714 11:46:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:40.714 11:46:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:40.714 11:46:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:40.714 11:46:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:40.714 11:46:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:40.714 11:46:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:08:40.714 11:46:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:40.714 11:46:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:40.714 11:46:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:40.714 11:46:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:40.714 11:46:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:40.714 11:46:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:40.714 11:46:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:40.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:40.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:08:40.714 00:08:40.714 --- 10.0.0.2 ping statistics --- 00:08:40.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.714 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:08:40.714 11:46:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:40.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:40.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms 00:08:40.714 00:08:40.714 --- 10.0.0.1 ping statistics --- 00:08:40.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.714 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:08:40.714 11:46:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:40.714 11:46:33 -- nvmf/common.sh@410 -- # return 0 00:08:40.714 11:46:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:40.714 11:46:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:40.714 11:46:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:40.714 11:46:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:40.714 11:46:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:40.714 11:46:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:40.714 11:46:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:40.714 11:46:33 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:40.714 11:46:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:40.715 11:46:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:40.715 11:46:33 -- common/autotest_common.sh@10 -- # set +x 00:08:40.715 11:46:33 -- nvmf/common.sh@469 -- # nvmfpid=1779561 00:08:40.715 11:46:33 -- nvmf/common.sh@470 -- # waitforlisten 1779561 00:08:40.715 11:46:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:40.715 11:46:33 -- common/autotest_common.sh@819 -- # '[' -z 1779561 ']' 00:08:40.715 11:46:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.715 11:46:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:40.715 11:46:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.715 11:46:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:40.715 11:46:33 -- common/autotest_common.sh@10 -- # set +x 00:08:40.715 [2024-06-10 11:46:33.526009] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
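The nvmf_tcp_init sequence above turns the two E810 ports into a self-contained target/initiator pair: cvl_0_0 is moved into a private network namespace and becomes the target interface at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is then launched under ip netns exec against that namespace. Condensed from the trace (interface names and addresses are the ones used in this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # accept NVMe/TCP traffic on 4420
    ping -c 1 10.0.0.2                                                  # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> root namespace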
00:08:40.715 [2024-06-10 11:46:33.526058] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.715 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.715 [2024-06-10 11:46:33.592142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:40.715 [2024-06-10 11:46:33.655525] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:40.715 [2024-06-10 11:46:33.655660] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.715 [2024-06-10 11:46:33.655670] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.715 [2024-06-10 11:46:33.655679] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:40.715 [2024-06-10 11:46:33.655816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.715 [2024-06-10 11:46:33.655918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.715 [2024-06-10 11:46:33.656066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.715 [2024-06-10 11:46:33.656067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.715 11:46:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:40.715 11:46:34 -- common/autotest_common.sh@852 -- # return 0 00:08:40.715 11:46:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:40.715 11:46:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:40.715 11:46:34 -- common/autotest_common.sh@10 -- # set +x 00:08:40.715 11:46:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.715 11:46:34 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:40.715 11:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.715 11:46:34 -- common/autotest_common.sh@10 -- # set +x 00:08:40.715 [2024-06-10 11:46:34.339415] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.715 11:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.715 11:46:34 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:40.715 11:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.715 11:46:34 -- common/autotest_common.sh@10 -- # set +x 00:08:40.715 [2024-06-10 11:46:34.355608] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:40.715 11:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.715 11:46:34 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:40.715 11:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.715 11:46:34 -- common/autotest_common.sh@10 -- # set +x 00:08:40.715 11:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.715 11:46:34 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:40.715 11:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.715 11:46:34 -- common/autotest_common.sh@10 -- # set +x 00:08:40.715 11:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.715 11:46:34 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:08:40.715 11:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.715 11:46:34 -- common/autotest_common.sh@10 -- # set +x 00:08:40.715 11:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.715 11:46:34 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.715 11:46:34 -- target/referrals.sh@48 -- # jq length 00:08:40.715 11:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.715 11:46:34 -- common/autotest_common.sh@10 -- # set +x 00:08:40.715 11:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.715 11:46:34 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:40.715 11:46:34 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:40.715 11:46:34 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:40.715 11:46:34 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.715 11:46:34 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:40.715 11:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.715 11:46:34 -- common/autotest_common.sh@10 -- # set +x 00:08:40.715 11:46:34 -- target/referrals.sh@21 -- # sort 00:08:40.715 11:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.976 11:46:34 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:40.976 11:46:34 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:40.976 11:46:34 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:40.976 11:46:34 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:40.976 11:46:34 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:40.976 11:46:34 -- target/referrals.sh@26 -- # sort 00:08:40.976 11:46:34 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:40.976 11:46:34 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:40.976 11:46:34 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:40.976 11:46:34 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:40.976 11:46:34 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:40.976 11:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.976 11:46:34 -- common/autotest_common.sh@10 -- # set +x 00:08:40.976 11:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.976 11:46:34 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:40.976 11:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.976 11:46:34 -- common/autotest_common.sh@10 -- # set +x 00:08:40.976 11:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.976 11:46:34 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:40.976 11:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.976 11:46:34 -- common/autotest_common.sh@10 -- # set +x 00:08:40.976 11:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.976 11:46:34 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.976 11:46:34 -- target/referrals.sh@56 -- # jq length 00:08:40.976 11:46:34 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.976 11:46:34 -- common/autotest_common.sh@10 -- # set +x 00:08:40.976 11:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:41.237 11:46:34 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:41.237 11:46:34 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:41.237 11:46:34 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:41.237 11:46:34 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:41.237 11:46:34 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:41.237 11:46:34 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:41.237 11:46:34 -- target/referrals.sh@26 -- # sort 00:08:41.237 11:46:34 -- target/referrals.sh@26 -- # echo 00:08:41.237 11:46:34 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:41.237 11:46:34 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:41.237 11:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:41.237 11:46:34 -- common/autotest_common.sh@10 -- # set +x 00:08:41.237 11:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:41.237 11:46:34 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:41.237 11:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:41.237 11:46:34 -- common/autotest_common.sh@10 -- # set +x 00:08:41.237 11:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:41.237 11:46:34 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:41.237 11:46:34 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:41.237 11:46:34 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:41.237 11:46:34 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:41.237 11:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:41.237 11:46:34 -- target/referrals.sh@21 -- # sort 00:08:41.237 11:46:34 -- common/autotest_common.sh@10 -- # set +x 00:08:41.237 11:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:41.237 11:46:34 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:41.237 11:46:34 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:41.237 11:46:34 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:41.237 11:46:34 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:41.237 11:46:34 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:41.237 11:46:34 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:41.237 11:46:34 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:41.237 11:46:34 -- target/referrals.sh@26 -- # sort 00:08:41.498 11:46:35 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:41.498 11:46:35 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:41.498 11:46:35 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:41.498 11:46:35 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:41.498 11:46:35 -- 
target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:41.498 11:46:35 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:41.498 11:46:35 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:41.759 11:46:35 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:41.759 11:46:35 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:41.759 11:46:35 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:41.759 11:46:35 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:41.759 11:46:35 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:41.759 11:46:35 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:41.759 11:46:35 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:41.759 11:46:35 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:41.759 11:46:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:41.759 11:46:35 -- common/autotest_common.sh@10 -- # set +x 00:08:41.759 11:46:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:41.759 11:46:35 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:41.759 11:46:35 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:41.759 11:46:35 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:41.759 11:46:35 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:41.759 11:46:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:41.759 11:46:35 -- target/referrals.sh@21 -- # sort 00:08:41.759 11:46:35 -- common/autotest_common.sh@10 -- # set +x 00:08:41.759 11:46:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:41.759 11:46:35 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:41.759 11:46:35 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:41.759 11:46:35 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:41.759 11:46:35 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:41.759 11:46:35 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:41.759 11:46:35 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:41.759 11:46:35 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:41.759 11:46:35 -- target/referrals.sh@26 -- # sort 00:08:41.759 11:46:35 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:41.759 11:46:35 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:41.759 11:46:35 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:41.759 11:46:35 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:41.759 11:46:35 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:42.020 11:46:35 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:42.020 11:46:35 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:42.020 11:46:35 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:42.020 11:46:35 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:42.020 11:46:35 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:42.020 11:46:35 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:42.020 11:46:35 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:42.020 11:46:35 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:42.020 11:46:35 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:42.020 11:46:35 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:42.020 11:46:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:42.020 11:46:35 -- common/autotest_common.sh@10 -- # set +x 00:08:42.020 11:46:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:42.020 11:46:35 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:42.020 11:46:35 -- target/referrals.sh@82 -- # jq length 00:08:42.020 11:46:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:42.020 11:46:35 -- common/autotest_common.sh@10 -- # set +x 00:08:42.020 11:46:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:42.280 11:46:35 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:42.280 11:46:35 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:42.280 11:46:35 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:42.280 11:46:35 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:42.280 11:46:35 -- target/referrals.sh@26 -- # sort 00:08:42.280 11:46:35 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:42.280 11:46:35 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:42.280 11:46:35 -- target/referrals.sh@26 -- # echo 00:08:42.280 11:46:35 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:42.280 11:46:35 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:42.280 11:46:35 -- target/referrals.sh@86 -- # nvmftestfini 00:08:42.280 11:46:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:42.280 11:46:35 -- nvmf/common.sh@116 -- # sync 00:08:42.280 11:46:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:42.280 11:46:35 -- nvmf/common.sh@119 -- # set +e 00:08:42.280 11:46:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:42.280 11:46:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:42.280 rmmod nvme_tcp 00:08:42.280 rmmod nvme_fabrics 00:08:42.280 rmmod nvme_keyring 00:08:42.280 11:46:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:42.280 11:46:35 -- nvmf/common.sh@123 -- # set -e 00:08:42.280 11:46:35 -- nvmf/common.sh@124 -- # return 0 00:08:42.280 11:46:35 -- nvmf/common.sh@477 
-- # '[' -n 1779561 ']' 00:08:42.280 11:46:35 -- nvmf/common.sh@478 -- # killprocess 1779561 00:08:42.280 11:46:35 -- common/autotest_common.sh@926 -- # '[' -z 1779561 ']' 00:08:42.280 11:46:35 -- common/autotest_common.sh@930 -- # kill -0 1779561 00:08:42.280 11:46:35 -- common/autotest_common.sh@931 -- # uname 00:08:42.280 11:46:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:42.280 11:46:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1779561 00:08:42.280 11:46:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:42.280 11:46:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:42.280 11:46:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1779561' 00:08:42.280 killing process with pid 1779561 00:08:42.280 11:46:36 -- common/autotest_common.sh@945 -- # kill 1779561 00:08:42.280 11:46:36 -- common/autotest_common.sh@950 -- # wait 1779561 00:08:42.541 11:46:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:42.541 11:46:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:42.541 11:46:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:42.541 11:46:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:42.541 11:46:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:42.541 11:46:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.541 11:46:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.541 11:46:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.087 11:46:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:45.088 00:08:45.088 real 0m12.129s 00:08:45.088 user 0m12.996s 00:08:45.088 sys 0m5.791s 00:08:45.088 11:46:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.088 11:46:38 -- common/autotest_common.sh@10 -- # set +x 00:08:45.088 ************************************ 00:08:45.088 END TEST nvmf_referrals 00:08:45.088 ************************************ 00:08:45.088 11:46:38 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:45.088 11:46:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:45.088 11:46:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:45.088 11:46:38 -- common/autotest_common.sh@10 -- # set +x 00:08:45.088 ************************************ 00:08:45.088 START TEST nvmf_connect_disconnect 00:08:45.088 ************************************ 00:08:45.088 11:46:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:45.088 * Looking for test storage... 
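run_test is the harness that launches each of these suites: the asterisk banners, the START TEST/END TEST lines and the real/user/sys summary above all come from it, as does the '[' 3 -le 1 ']' argument-count check in the trace. A simplified stand-in inferred from that output; the real helper in autotest_common.sh also manages xtrace, as the xtrace_disable calls in the trace show:

    # Inferred sketch, not the actual autotest_common.sh implementation.
    run_test() {
        [ $# -le 1 ] && return 1          # needs a test name plus a command to run
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }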
00:08:45.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:45.088 11:46:38 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.088 11:46:38 -- nvmf/common.sh@7 -- # uname -s 00:08:45.088 11:46:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.088 11:46:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.088 11:46:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.088 11:46:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.088 11:46:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.088 11:46:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.088 11:46:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.088 11:46:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.088 11:46:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.088 11:46:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.088 11:46:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:45.088 11:46:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:45.088 11:46:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.088 11:46:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.088 11:46:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.088 11:46:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:45.088 11:46:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.088 11:46:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.088 11:46:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.088 11:46:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.088 11:46:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.088 11:46:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.088 11:46:38 -- paths/export.sh@5 -- # export PATH 00:08:45.088 11:46:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.088 11:46:38 -- nvmf/common.sh@46 -- # : 0 00:08:45.088 11:46:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:45.088 11:46:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:45.088 11:46:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:45.088 11:46:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.088 11:46:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.088 11:46:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:45.088 11:46:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:45.088 11:46:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:45.088 11:46:38 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:45.088 11:46:38 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:45.088 11:46:38 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:45.088 11:46:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:45.088 11:46:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.088 11:46:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:45.088 11:46:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:45.088 11:46:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:45.088 11:46:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.088 11:46:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:45.088 11:46:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.088 11:46:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:45.088 11:46:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:45.088 11:46:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:45.088 11:46:38 -- common/autotest_common.sh@10 -- # set +x 00:08:51.681 11:46:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:51.681 11:46:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:51.681 11:46:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:51.681 11:46:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:51.681 11:46:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:51.681 11:46:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:51.681 11:46:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:51.681 11:46:45 -- nvmf/common.sh@294 -- # net_devs=() 00:08:51.681 11:46:45 -- nvmf/common.sh@294 -- # local -ga net_devs 
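gather_supported_nvmf_pci_devs, replayed here for the connect/disconnect suite exactly as it ran for the referrals suite, selects candidate NICs purely by vendor:device ID. A trimmed sketch of that selection; pci_bus_cache is assumed to have been filled by an earlier bus scan that is not part of this excerpt:

    declare -A pci_bus_cache                                      # "vendor:device" -> PCI addresses
    pci_bus_cache["0x8086:0x159b"]="0000:31:00.0 0000:31:00.1"    # the two ports this job finds
    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})                     # E810 variants
    e810+=(${pci_bus_cache["$intel:0x159b"]})
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]})                   # one of several ConnectX IDs added in the trace
    pci_devs=("${e810[@]}")                                       # this job's NIC type is e810, so only those are kept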
00:08:51.681 11:46:45 -- nvmf/common.sh@295 -- # e810=() 00:08:51.681 11:46:45 -- nvmf/common.sh@295 -- # local -ga e810 00:08:51.681 11:46:45 -- nvmf/common.sh@296 -- # x722=() 00:08:51.681 11:46:45 -- nvmf/common.sh@296 -- # local -ga x722 00:08:51.681 11:46:45 -- nvmf/common.sh@297 -- # mlx=() 00:08:51.681 11:46:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:51.681 11:46:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:51.681 11:46:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:51.681 11:46:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:51.681 11:46:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:51.681 11:46:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:51.681 11:46:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:51.681 11:46:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:51.681 11:46:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:51.681 11:46:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:51.681 11:46:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:51.681 11:46:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:51.681 11:46:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:51.681 11:46:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:51.681 11:46:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:51.681 11:46:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:51.681 11:46:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:51.681 11:46:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:51.681 11:46:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:51.681 11:46:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:51.681 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:51.681 11:46:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:51.681 11:46:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:51.681 11:46:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.681 11:46:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.681 11:46:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:51.681 11:46:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:51.681 11:46:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:51.681 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:51.681 11:46:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:51.681 11:46:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:51.681 11:46:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.681 11:46:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.681 11:46:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:51.681 11:46:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:51.681 11:46:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:51.681 11:46:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:51.681 11:46:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:51.681 11:46:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.681 11:46:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:51.681 11:46:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.681 11:46:45 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:31:00.0: cvl_0_0' 00:08:51.681 Found net devices under 0000:31:00.0: cvl_0_0 00:08:51.681 11:46:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.681 11:46:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:51.681 11:46:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.681 11:46:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:51.681 11:46:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.681 11:46:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:51.681 Found net devices under 0000:31:00.1: cvl_0_1 00:08:51.681 11:46:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.681 11:46:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:51.681 11:46:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:51.681 11:46:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:51.682 11:46:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:51.682 11:46:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:51.682 11:46:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.682 11:46:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:51.682 11:46:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:51.682 11:46:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:51.682 11:46:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:51.682 11:46:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:51.682 11:46:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:51.682 11:46:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:51.682 11:46:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.682 11:46:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:51.682 11:46:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:51.682 11:46:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:51.682 11:46:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:51.682 11:46:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:51.682 11:46:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:51.682 11:46:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:51.682 11:46:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:51.943 11:46:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:51.943 11:46:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:51.943 11:46:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:51.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:51.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:08:51.943 00:08:51.943 --- 10.0.0.2 ping statistics --- 00:08:51.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.943 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:08:51.943 11:46:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:51.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:51.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:08:51.943 00:08:51.943 --- 10.0.0.1 ping statistics --- 00:08:51.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.943 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:08:51.943 11:46:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.943 11:46:45 -- nvmf/common.sh@410 -- # return 0 00:08:51.943 11:46:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:51.943 11:46:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.943 11:46:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:51.943 11:46:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:51.943 11:46:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.943 11:46:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:51.943 11:46:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:51.943 11:46:45 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:51.943 11:46:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:51.943 11:46:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:51.943 11:46:45 -- common/autotest_common.sh@10 -- # set +x 00:08:51.943 11:46:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:51.943 11:46:45 -- nvmf/common.sh@469 -- # nvmfpid=1784370 00:08:51.943 11:46:45 -- nvmf/common.sh@470 -- # waitforlisten 1784370 00:08:51.943 11:46:45 -- common/autotest_common.sh@819 -- # '[' -z 1784370 ']' 00:08:51.943 11:46:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.943 11:46:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:51.943 11:46:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.943 11:46:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:51.943 11:46:45 -- common/autotest_common.sh@10 -- # set +x 00:08:51.943 [2024-06-10 11:46:45.650092] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:51.943 [2024-06-10 11:46:45.650156] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.943 EAL: No free 2048 kB hugepages reported on node 1 00:08:52.204 [2024-06-10 11:46:45.720864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:52.204 [2024-06-10 11:46:45.793879] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:52.204 [2024-06-10 11:46:45.794019] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.204 [2024-06-10 11:46:45.794029] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.204 [2024-06-10 11:46:45.794038] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
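Both suites start nvmf_tgt with -m 0xF, which is why spdk_app_start reports four available cores here and one reactor per core 0-3 starts in the lines that follow: the mask is simply one bit per CPU core. A small illustration of how the mask decodes:

    # 0xF == 0b1111: bits 0-3 set, so cores 0, 1, 2 and 3 each run a reactor.
    mask=0xF
    for core in {0..7}; do
        (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
    done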
00:08:52.204 [2024-06-10 11:46:45.794188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.204 [2024-06-10 11:46:45.794298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.204 [2024-06-10 11:46:45.794397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.204 [2024-06-10 11:46:45.794397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.776 11:46:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:52.776 11:46:46 -- common/autotest_common.sh@852 -- # return 0 00:08:52.776 11:46:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:52.776 11:46:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:52.776 11:46:46 -- common/autotest_common.sh@10 -- # set +x 00:08:52.776 11:46:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.776 11:46:46 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:52.776 11:46:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.776 11:46:46 -- common/autotest_common.sh@10 -- # set +x 00:08:52.776 [2024-06-10 11:46:46.470467] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:52.776 11:46:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.776 11:46:46 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:52.776 11:46:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.776 11:46:46 -- common/autotest_common.sh@10 -- # set +x 00:08:52.776 11:46:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.776 11:46:46 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:52.776 11:46:46 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:52.776 11:46:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.776 11:46:46 -- common/autotest_common.sh@10 -- # set +x 00:08:52.776 11:46:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.776 11:46:46 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:52.776 11:46:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.776 11:46:46 -- common/autotest_common.sh@10 -- # set +x 00:08:52.776 11:46:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.776 11:46:46 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:52.776 11:46:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.776 11:46:46 -- common/autotest_common.sh@10 -- # set +x 00:08:52.776 [2024-06-10 11:46:46.529817] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:52.776 11:46:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.776 11:46:46 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:52.776 11:46:46 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:52.776 11:46:46 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:52.776 11:46:46 -- target/connect_disconnect.sh@34 -- # set +x 00:08:55.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
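The run of 'NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)' notices beginning above and filling the next several minutes is the 100-iteration loop configured a few lines earlier (num_iterations=100, NVME_CONNECT='nvme connect -i 8'); the loop body itself is not expanded in the trace because set +x was issued first. A hedged guess at the shape of one iteration, using the subsystem and listener just created:

    # Sketch only; the real loop body is hidden behind 'set +x' above and may perform extra checks.
    for (( i = 0; i < 100; i++ )); do
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
            --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # prints the "disconnected 1 controller(s)" notice
    done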
00:09:04.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.627 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.642 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.635 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:10:56.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.638 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.968 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.433 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.979 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.490 11:50:35 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
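nvmftestfini, which runs next, unwinds the fixture in a fixed order: unload the host-side NVMe modules, stop the nvmf_tgt process (pid 1784370, identified as reactor_0 via ps), remove the spdk namespace and flush the leftover initiator address. Condensed from the lines that follow, with the namespace removal hedged because _remove_spdk_ns runs with xtrace disabled:

    modprobe -v -r nvme-tcp            # the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines come from this
    modprobe -v -r nvme-fabrics
    kill -0 1784370 && kill 1784370    # killprocess: confirm the pid is alive, then terminate the target
    wait 1784370
    # _remove_spdk_ns is not traced; presumably it deletes the namespace, e.g.:
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1           # drop 10.0.0.1/24 from the initiator interface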
00:12:42.490 11:50:35 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:42.490 11:50:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:42.490 11:50:35 -- nvmf/common.sh@116 -- # sync 00:12:42.490 11:50:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:42.490 11:50:35 -- nvmf/common.sh@119 -- # set +e 00:12:42.490 11:50:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:42.490 11:50:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:42.490 rmmod nvme_tcp 00:12:42.490 rmmod nvme_fabrics 00:12:42.490 rmmod nvme_keyring 00:12:42.490 11:50:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:42.490 11:50:36 -- nvmf/common.sh@123 -- # set -e 00:12:42.490 11:50:36 -- nvmf/common.sh@124 -- # return 0 00:12:42.490 11:50:36 -- nvmf/common.sh@477 -- # '[' -n 1784370 ']' 00:12:42.490 11:50:36 -- nvmf/common.sh@478 -- # killprocess 1784370 00:12:42.490 11:50:36 -- common/autotest_common.sh@926 -- # '[' -z 1784370 ']' 00:12:42.490 11:50:36 -- common/autotest_common.sh@930 -- # kill -0 1784370 00:12:42.490 11:50:36 -- common/autotest_common.sh@931 -- # uname 00:12:42.490 11:50:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:42.490 11:50:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1784370 00:12:42.490 11:50:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:42.490 11:50:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:42.490 11:50:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1784370' 00:12:42.490 killing process with pid 1784370 00:12:42.490 11:50:36 -- common/autotest_common.sh@945 -- # kill 1784370 00:12:42.490 11:50:36 -- common/autotest_common.sh@950 -- # wait 1784370 00:12:42.490 11:50:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:42.490 11:50:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:42.490 11:50:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:42.490 11:50:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:42.490 11:50:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:42.490 11:50:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.490 11:50:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:42.490 11:50:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.105 11:50:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:45.105 00:12:45.105 real 4m0.010s 00:12:45.105 user 15m16.159s 00:12:45.105 sys 0m18.860s 00:12:45.105 11:50:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:45.105 11:50:38 -- common/autotest_common.sh@10 -- # set +x 00:12:45.105 ************************************ 00:12:45.105 END TEST nvmf_connect_disconnect 00:12:45.105 ************************************ 00:12:45.105 11:50:38 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:45.105 11:50:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:45.105 11:50:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:45.105 11:50:38 -- common/autotest_common.sh@10 -- # set +x 00:12:45.105 ************************************ 00:12:45.105 START TEST nvmf_multitarget 00:12:45.105 ************************************ 00:12:45.105 11:50:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:45.105 * Looking for test storage... 
00:12:45.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:45.105 11:50:38 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.105 11:50:38 -- nvmf/common.sh@7 -- # uname -s 00:12:45.105 11:50:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.105 11:50:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.105 11:50:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.105 11:50:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.105 11:50:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.105 11:50:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.105 11:50:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.105 11:50:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.105 11:50:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.105 11:50:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.106 11:50:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:45.106 11:50:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:45.106 11:50:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.106 11:50:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.106 11:50:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.106 11:50:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:45.106 11:50:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.106 11:50:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.106 11:50:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.106 11:50:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.106 11:50:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.106 11:50:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.106 11:50:38 -- paths/export.sh@5 -- # export PATH 00:12:45.106 11:50:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.106 11:50:38 -- nvmf/common.sh@46 -- # : 0 00:12:45.106 11:50:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:45.106 11:50:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:45.106 11:50:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:45.106 11:50:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.106 11:50:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.106 11:50:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:45.106 11:50:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:45.106 11:50:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:45.106 11:50:38 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:45.106 11:50:38 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:45.106 11:50:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:45.106 11:50:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.106 11:50:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:45.106 11:50:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:45.106 11:50:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:45.106 11:50:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.106 11:50:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:45.106 11:50:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.106 11:50:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:45.106 11:50:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:45.106 11:50:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:45.106 11:50:38 -- common/autotest_common.sh@10 -- # set +x 00:12:51.696 11:50:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:51.696 11:50:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:51.696 11:50:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:51.696 11:50:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:51.696 11:50:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:51.696 11:50:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:51.696 11:50:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:51.696 11:50:45 -- nvmf/common.sh@294 -- # net_devs=() 00:12:51.696 11:50:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:51.696 11:50:45 -- 
nvmf/common.sh@295 -- # e810=() 00:12:51.696 11:50:45 -- nvmf/common.sh@295 -- # local -ga e810 00:12:51.696 11:50:45 -- nvmf/common.sh@296 -- # x722=() 00:12:51.696 11:50:45 -- nvmf/common.sh@296 -- # local -ga x722 00:12:51.696 11:50:45 -- nvmf/common.sh@297 -- # mlx=() 00:12:51.696 11:50:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:51.696 11:50:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.696 11:50:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.696 11:50:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.696 11:50:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.696 11:50:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.696 11:50:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.696 11:50:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.696 11:50:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.696 11:50:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.696 11:50:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.696 11:50:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.696 11:50:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:51.696 11:50:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:51.696 11:50:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:51.696 11:50:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:51.696 11:50:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:51.696 11:50:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:51.696 11:50:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:51.696 11:50:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:51.696 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:51.696 11:50:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:51.696 11:50:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:51.696 11:50:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.696 11:50:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.696 11:50:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:51.696 11:50:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:51.696 11:50:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:51.696 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:51.696 11:50:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:51.696 11:50:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:51.696 11:50:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.696 11:50:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.696 11:50:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:51.696 11:50:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:51.696 11:50:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:51.696 11:50:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:51.696 11:50:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:51.696 11:50:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.697 11:50:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:51.697 11:50:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.697 11:50:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:12:51.697 Found net devices under 0000:31:00.0: cvl_0_0 00:12:51.697 11:50:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.697 11:50:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:51.697 11:50:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.697 11:50:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:51.697 11:50:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.697 11:50:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:51.697 Found net devices under 0000:31:00.1: cvl_0_1 00:12:51.697 11:50:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.697 11:50:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:51.697 11:50:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:51.697 11:50:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:51.697 11:50:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:51.697 11:50:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:51.697 11:50:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.697 11:50:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.697 11:50:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:51.697 11:50:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:51.697 11:50:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:51.697 11:50:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:51.697 11:50:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:51.697 11:50:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:51.697 11:50:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.697 11:50:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:51.697 11:50:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:51.697 11:50:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:51.697 11:50:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:51.958 11:50:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:51.958 11:50:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:51.958 11:50:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:51.958 11:50:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:51.958 11:50:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:51.958 11:50:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:51.958 11:50:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:51.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:51.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:12:51.958 00:12:51.958 --- 10.0.0.2 ping statistics --- 00:12:51.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.958 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:12:51.958 11:50:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:51.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:51.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:12:51.958 00:12:51.958 --- 10.0.0.1 ping statistics --- 00:12:51.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.958 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:12:51.958 11:50:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.958 11:50:45 -- nvmf/common.sh@410 -- # return 0 00:12:51.958 11:50:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:51.959 11:50:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.959 11:50:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:51.959 11:50:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:51.959 11:50:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.959 11:50:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:51.959 11:50:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:51.959 11:50:45 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:51.959 11:50:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:51.959 11:50:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:51.959 11:50:45 -- common/autotest_common.sh@10 -- # set +x 00:12:51.959 11:50:45 -- nvmf/common.sh@469 -- # nvmfpid=1836363 00:12:51.959 11:50:45 -- nvmf/common.sh@470 -- # waitforlisten 1836363 00:12:51.959 11:50:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:51.959 11:50:45 -- common/autotest_common.sh@819 -- # '[' -z 1836363 ']' 00:12:51.959 11:50:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.959 11:50:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:51.959 11:50:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.959 11:50:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:51.959 11:50:45 -- common/autotest_common.sh@10 -- # set +x 00:12:52.220 [2024-06-10 11:50:45.763205] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:12:52.220 [2024-06-10 11:50:45.763289] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.220 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.220 [2024-06-10 11:50:45.834624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.220 [2024-06-10 11:50:45.908018] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:52.220 [2024-06-10 11:50:45.908152] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.220 [2024-06-10 11:50:45.908162] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.220 [2024-06-10 11:50:45.908170] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
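The nvmf_tcp_init steps traced above amount to a simple two-port loopback topology: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target side, while its sibling port (cvl_0_1) stays in the root namespace as the initiator. A condensed sketch of that setup, reusing the interface names, namespace name, and 10.0.0.0/24 addressing from this trace (all specific to this test rig):

  # Move the target-side port into its own namespace; the initiator port stays put.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # Initiator gets 10.0.0.1, target gets 10.0.0.2 inside the namespace.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Open the default NVMe/TCP port and sanity-check reachability in both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every nvmf_tgt instance in the rest of this run is then launched under ip netns exec cvl_0_0_ns_spdk, so the target listens on 10.0.0.2 while nvme-cli connects from 10.0.0.1 in the root namespace.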
00:12:52.220 [2024-06-10 11:50:45.908339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.220 [2024-06-10 11:50:45.908600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.220 [2024-06-10 11:50:45.908757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.220 [2024-06-10 11:50:45.908757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.791 11:50:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:52.791 11:50:46 -- common/autotest_common.sh@852 -- # return 0 00:12:52.791 11:50:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:52.791 11:50:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:52.791 11:50:46 -- common/autotest_common.sh@10 -- # set +x 00:12:53.052 11:50:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.052 11:50:46 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:53.052 11:50:46 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:53.052 11:50:46 -- target/multitarget.sh@21 -- # jq length 00:12:53.052 11:50:46 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:53.052 11:50:46 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:53.052 "nvmf_tgt_1" 00:12:53.052 11:50:46 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:53.313 "nvmf_tgt_2" 00:12:53.313 11:50:46 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:53.313 11:50:46 -- target/multitarget.sh@28 -- # jq length 00:12:53.313 11:50:46 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:53.313 11:50:46 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:53.313 true 00:12:53.313 11:50:47 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:53.575 true 00:12:53.575 11:50:47 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:53.575 11:50:47 -- target/multitarget.sh@35 -- # jq length 00:12:53.575 11:50:47 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:53.575 11:50:47 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:53.575 11:50:47 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:53.575 11:50:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:53.575 11:50:47 -- nvmf/common.sh@116 -- # sync 00:12:53.575 11:50:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:53.575 11:50:47 -- nvmf/common.sh@119 -- # set +e 00:12:53.575 11:50:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:53.575 11:50:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:53.575 rmmod nvme_tcp 00:12:53.575 rmmod nvme_fabrics 00:12:53.575 rmmod nvme_keyring 00:12:53.575 11:50:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:53.575 11:50:47 -- nvmf/common.sh@123 -- # set -e 00:12:53.575 11:50:47 -- nvmf/common.sh@124 -- # return 0 
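The rest of multitarget.sh repeats the same pattern over the RPC socket: create a second extra target, confirm the target count, then delete both and confirm only the default target is left. A condensed sketch of the whole create/query/delete cycle, using the multitarget_rpc.py helper and jq exactly as the trace does (the path below is the one from this workspace):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

  # Baseline: only the default target exists.
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]

  # Add two named targets (same flags as in the trace above).
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]

  # Remove them again and verify the default target is all that remains.
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]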
00:12:53.575 11:50:47 -- nvmf/common.sh@477 -- # '[' -n 1836363 ']' 00:12:53.575 11:50:47 -- nvmf/common.sh@478 -- # killprocess 1836363 00:12:53.575 11:50:47 -- common/autotest_common.sh@926 -- # '[' -z 1836363 ']' 00:12:53.575 11:50:47 -- common/autotest_common.sh@930 -- # kill -0 1836363 00:12:53.575 11:50:47 -- common/autotest_common.sh@931 -- # uname 00:12:53.575 11:50:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:53.575 11:50:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1836363 00:12:53.836 11:50:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:53.836 11:50:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:53.836 11:50:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1836363' 00:12:53.836 killing process with pid 1836363 00:12:53.836 11:50:47 -- common/autotest_common.sh@945 -- # kill 1836363 00:12:53.836 11:50:47 -- common/autotest_common.sh@950 -- # wait 1836363 00:12:53.836 11:50:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:53.836 11:50:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:53.836 11:50:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:53.836 11:50:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:53.836 11:50:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:53.836 11:50:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.836 11:50:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:53.836 11:50:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.383 11:50:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:56.383 00:12:56.383 real 0m11.234s 00:12:56.383 user 0m9.144s 00:12:56.383 sys 0m5.765s 00:12:56.383 11:50:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:56.383 11:50:49 -- common/autotest_common.sh@10 -- # set +x 00:12:56.383 ************************************ 00:12:56.383 END TEST nvmf_multitarget 00:12:56.383 ************************************ 00:12:56.383 11:50:49 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:56.383 11:50:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:56.383 11:50:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:56.383 11:50:49 -- common/autotest_common.sh@10 -- # set +x 00:12:56.383 ************************************ 00:12:56.383 START TEST nvmf_rpc 00:12:56.383 ************************************ 00:12:56.383 11:50:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:56.383 * Looking for test storage... 
00:12:56.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:56.383 11:50:49 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:56.383 11:50:49 -- nvmf/common.sh@7 -- # uname -s 00:12:56.383 11:50:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:56.383 11:50:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:56.383 11:50:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:56.383 11:50:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:56.383 11:50:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:56.383 11:50:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:56.383 11:50:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:56.383 11:50:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:56.383 11:50:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:56.383 11:50:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:56.383 11:50:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:56.383 11:50:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:56.383 11:50:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:56.383 11:50:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:56.383 11:50:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:56.383 11:50:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:56.383 11:50:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:56.383 11:50:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:56.383 11:50:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:56.383 11:50:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.383 11:50:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.383 11:50:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.383 11:50:49 -- paths/export.sh@5 -- # export PATH 00:12:56.383 11:50:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.383 11:50:49 -- nvmf/common.sh@46 -- # : 0 00:12:56.383 11:50:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:56.383 11:50:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:56.384 11:50:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:56.384 11:50:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:56.384 11:50:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:56.384 11:50:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:56.384 11:50:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:56.384 11:50:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:56.384 11:50:49 -- target/rpc.sh@11 -- # loops=5 00:12:56.384 11:50:49 -- target/rpc.sh@23 -- # nvmftestinit 00:12:56.384 11:50:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:56.384 11:50:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:56.384 11:50:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:56.384 11:50:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:56.384 11:50:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:56.384 11:50:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.384 11:50:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:56.384 11:50:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.384 11:50:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:56.384 11:50:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:56.384 11:50:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:56.384 11:50:49 -- common/autotest_common.sh@10 -- # set +x 00:13:02.977 11:50:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:02.977 11:50:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:02.977 11:50:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:02.977 11:50:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:02.977 11:50:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:02.977 11:50:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:02.977 11:50:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:02.977 11:50:56 -- nvmf/common.sh@294 -- # net_devs=() 00:13:02.977 11:50:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:02.977 11:50:56 -- nvmf/common.sh@295 -- # e810=() 00:13:02.977 11:50:56 -- nvmf/common.sh@295 -- # local -ga e810 00:13:02.977 
11:50:56 -- nvmf/common.sh@296 -- # x722=() 00:13:02.977 11:50:56 -- nvmf/common.sh@296 -- # local -ga x722 00:13:02.977 11:50:56 -- nvmf/common.sh@297 -- # mlx=() 00:13:02.977 11:50:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:02.977 11:50:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:02.977 11:50:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:02.977 11:50:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:02.977 11:50:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:02.977 11:50:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:02.977 11:50:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:02.977 11:50:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:02.977 11:50:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:02.977 11:50:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:02.977 11:50:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:02.977 11:50:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:02.977 11:50:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:02.977 11:50:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:02.977 11:50:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:02.977 11:50:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:02.977 11:50:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:02.977 11:50:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:02.977 11:50:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:02.977 11:50:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:02.977 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:02.977 11:50:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:02.977 11:50:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:02.977 11:50:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.977 11:50:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.977 11:50:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:02.977 11:50:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:02.977 11:50:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:02.977 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:02.977 11:50:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:02.977 11:50:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:02.977 11:50:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.977 11:50:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.977 11:50:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:02.977 11:50:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:02.977 11:50:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:02.977 11:50:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:02.977 11:50:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:02.977 11:50:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.977 11:50:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:02.977 11:50:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.977 11:50:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:02.977 Found net devices under 0000:31:00.0: cvl_0_0 00:13:02.977 11:50:56 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:02.977 11:50:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:02.977 11:50:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.977 11:50:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:02.977 11:50:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.977 11:50:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:02.977 Found net devices under 0000:31:00.1: cvl_0_1 00:13:02.977 11:50:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.977 11:50:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:02.977 11:50:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:02.977 11:50:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:02.977 11:50:56 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:02.977 11:50:56 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:02.977 11:50:56 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.977 11:50:56 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:02.977 11:50:56 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:02.977 11:50:56 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:02.977 11:50:56 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:02.977 11:50:56 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:02.977 11:50:56 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:02.977 11:50:56 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:02.977 11:50:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.977 11:50:56 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:02.977 11:50:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:02.977 11:50:56 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:02.977 11:50:56 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:03.239 11:50:56 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:03.239 11:50:56 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:03.239 11:50:56 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:03.239 11:50:56 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:03.239 11:50:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:03.239 11:50:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:03.239 11:50:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:03.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:03.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:13:03.239 00:13:03.239 --- 10.0.0.2 ping statistics --- 00:13:03.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.239 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:13:03.239 11:50:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:03.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:03.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:13:03.239 00:13:03.239 --- 10.0.0.1 ping statistics --- 00:13:03.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.239 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:13:03.239 11:50:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.239 11:50:56 -- nvmf/common.sh@410 -- # return 0 00:13:03.239 11:50:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:03.239 11:50:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.239 11:50:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:03.239 11:50:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:03.239 11:50:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.239 11:50:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:03.239 11:50:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:03.499 11:50:57 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:03.499 11:50:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:03.499 11:50:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:03.499 11:50:57 -- common/autotest_common.sh@10 -- # set +x 00:13:03.499 11:50:57 -- nvmf/common.sh@469 -- # nvmfpid=1840922 00:13:03.499 11:50:57 -- nvmf/common.sh@470 -- # waitforlisten 1840922 00:13:03.499 11:50:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:03.500 11:50:57 -- common/autotest_common.sh@819 -- # '[' -z 1840922 ']' 00:13:03.500 11:50:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.500 11:50:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:03.500 11:50:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.500 11:50:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:03.500 11:50:57 -- common/autotest_common.sh@10 -- # set +x 00:13:03.500 [2024-06-10 11:50:57.073780] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:03.500 [2024-06-10 11:50:57.073841] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.500 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.500 [2024-06-10 11:50:57.144821] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:03.500 [2024-06-10 11:50:57.218163] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:03.500 [2024-06-10 11:50:57.218304] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.500 [2024-06-10 11:50:57.218314] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.500 [2024-06-10 11:50:57.218322] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
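Before creating any transports or subsystems, rpc.sh queries the freshly started target with nvmf_get_stats and checks that all four poll groups (one per core in the 0xF mask) are idle. The jcount/jsum checks that follow boil down to counting and summing fields of that JSON; a rough equivalent using jq and awk directly, assuming the stock scripts/rpc.py client in place of the test's rpc_cmd wrapper:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  stats="$($RPC nvmf_get_stats)"

  # jcount: number of poll groups reported (expected to match the core mask, here 4).
  echo "$stats" | jq '.poll_groups[].name' | wc -l

  # jsum: total admin and I/O qpairs across all poll groups (expected to be 0 on an idle target).
  echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'
  echo "$stats" | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'

Once nvmf_create_transport -t tcp -o -u 8192 has run, the same query reports a TCP entry under each poll group's transports array, which is what the later stats dump in this trace shows.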
00:13:03.500 [2024-06-10 11:50:57.218497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.500 [2024-06-10 11:50:57.218614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.500 [2024-06-10 11:50:57.218771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.500 [2024-06-10 11:50:57.218772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:04.441 11:50:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:04.441 11:50:57 -- common/autotest_common.sh@852 -- # return 0 00:13:04.441 11:50:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:04.441 11:50:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:04.441 11:50:57 -- common/autotest_common.sh@10 -- # set +x 00:13:04.441 11:50:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.441 11:50:57 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:04.441 11:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.441 11:50:57 -- common/autotest_common.sh@10 -- # set +x 00:13:04.441 11:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.441 11:50:57 -- target/rpc.sh@26 -- # stats='{ 00:13:04.441 "tick_rate": 2400000000, 00:13:04.441 "poll_groups": [ 00:13:04.441 { 00:13:04.441 "name": "nvmf_tgt_poll_group_0", 00:13:04.441 "admin_qpairs": 0, 00:13:04.441 "io_qpairs": 0, 00:13:04.441 "current_admin_qpairs": 0, 00:13:04.441 "current_io_qpairs": 0, 00:13:04.441 "pending_bdev_io": 0, 00:13:04.441 "completed_nvme_io": 0, 00:13:04.441 "transports": [] 00:13:04.441 }, 00:13:04.441 { 00:13:04.441 "name": "nvmf_tgt_poll_group_1", 00:13:04.441 "admin_qpairs": 0, 00:13:04.441 "io_qpairs": 0, 00:13:04.441 "current_admin_qpairs": 0, 00:13:04.441 "current_io_qpairs": 0, 00:13:04.441 "pending_bdev_io": 0, 00:13:04.441 "completed_nvme_io": 0, 00:13:04.441 "transports": [] 00:13:04.441 }, 00:13:04.441 { 00:13:04.441 "name": "nvmf_tgt_poll_group_2", 00:13:04.441 "admin_qpairs": 0, 00:13:04.441 "io_qpairs": 0, 00:13:04.441 "current_admin_qpairs": 0, 00:13:04.441 "current_io_qpairs": 0, 00:13:04.441 "pending_bdev_io": 0, 00:13:04.441 "completed_nvme_io": 0, 00:13:04.441 "transports": [] 00:13:04.441 }, 00:13:04.441 { 00:13:04.441 "name": "nvmf_tgt_poll_group_3", 00:13:04.441 "admin_qpairs": 0, 00:13:04.442 "io_qpairs": 0, 00:13:04.442 "current_admin_qpairs": 0, 00:13:04.442 "current_io_qpairs": 0, 00:13:04.442 "pending_bdev_io": 0, 00:13:04.442 "completed_nvme_io": 0, 00:13:04.442 "transports": [] 00:13:04.442 } 00:13:04.442 ] 00:13:04.442 }' 00:13:04.442 11:50:57 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:04.442 11:50:57 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:04.442 11:50:57 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:04.442 11:50:57 -- target/rpc.sh@15 -- # wc -l 00:13:04.442 11:50:57 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:04.442 11:50:57 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:04.442 11:50:58 -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:04.442 11:50:58 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:04.442 11:50:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.442 11:50:58 -- common/autotest_common.sh@10 -- # set +x 00:13:04.442 [2024-06-10 11:50:58.009759] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.442 11:50:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.442 11:50:58 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:04.442 11:50:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.442 11:50:58 -- common/autotest_common.sh@10 -- # set +x 00:13:04.442 11:50:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.442 11:50:58 -- target/rpc.sh@33 -- # stats='{ 00:13:04.442 "tick_rate": 2400000000, 00:13:04.442 "poll_groups": [ 00:13:04.442 { 00:13:04.442 "name": "nvmf_tgt_poll_group_0", 00:13:04.442 "admin_qpairs": 0, 00:13:04.442 "io_qpairs": 0, 00:13:04.442 "current_admin_qpairs": 0, 00:13:04.442 "current_io_qpairs": 0, 00:13:04.442 "pending_bdev_io": 0, 00:13:04.442 "completed_nvme_io": 0, 00:13:04.442 "transports": [ 00:13:04.442 { 00:13:04.442 "trtype": "TCP" 00:13:04.442 } 00:13:04.442 ] 00:13:04.442 }, 00:13:04.442 { 00:13:04.442 "name": "nvmf_tgt_poll_group_1", 00:13:04.442 "admin_qpairs": 0, 00:13:04.442 "io_qpairs": 0, 00:13:04.442 "current_admin_qpairs": 0, 00:13:04.442 "current_io_qpairs": 0, 00:13:04.442 "pending_bdev_io": 0, 00:13:04.442 "completed_nvme_io": 0, 00:13:04.442 "transports": [ 00:13:04.442 { 00:13:04.442 "trtype": "TCP" 00:13:04.442 } 00:13:04.442 ] 00:13:04.442 }, 00:13:04.442 { 00:13:04.442 "name": "nvmf_tgt_poll_group_2", 00:13:04.442 "admin_qpairs": 0, 00:13:04.442 "io_qpairs": 0, 00:13:04.442 "current_admin_qpairs": 0, 00:13:04.442 "current_io_qpairs": 0, 00:13:04.442 "pending_bdev_io": 0, 00:13:04.442 "completed_nvme_io": 0, 00:13:04.442 "transports": [ 00:13:04.442 { 00:13:04.442 "trtype": "TCP" 00:13:04.442 } 00:13:04.442 ] 00:13:04.442 }, 00:13:04.442 { 00:13:04.442 "name": "nvmf_tgt_poll_group_3", 00:13:04.442 "admin_qpairs": 0, 00:13:04.442 "io_qpairs": 0, 00:13:04.442 "current_admin_qpairs": 0, 00:13:04.442 "current_io_qpairs": 0, 00:13:04.442 "pending_bdev_io": 0, 00:13:04.442 "completed_nvme_io": 0, 00:13:04.442 "transports": [ 00:13:04.442 { 00:13:04.442 "trtype": "TCP" 00:13:04.442 } 00:13:04.442 ] 00:13:04.442 } 00:13:04.442 ] 00:13:04.442 }' 00:13:04.442 11:50:58 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:04.442 11:50:58 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:04.442 11:50:58 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:04.442 11:50:58 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:04.442 11:50:58 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:04.442 11:50:58 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:04.442 11:50:58 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:04.442 11:50:58 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:04.442 11:50:58 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:04.442 11:50:58 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:04.442 11:50:58 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:04.442 11:50:58 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:04.442 11:50:58 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:04.442 11:50:58 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:04.442 11:50:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.442 11:50:58 -- common/autotest_common.sh@10 -- # set +x 00:13:04.442 Malloc1 00:13:04.442 11:50:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.442 11:50:58 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:04.442 11:50:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.442 11:50:58 -- common/autotest_common.sh@10 -- # set +x 00:13:04.442 
11:50:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.442 11:50:58 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.442 11:50:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.442 11:50:58 -- common/autotest_common.sh@10 -- # set +x 00:13:04.442 11:50:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.442 11:50:58 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:04.442 11:50:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.442 11:50:58 -- common/autotest_common.sh@10 -- # set +x 00:13:04.442 11:50:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.442 11:50:58 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.442 11:50:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.442 11:50:58 -- common/autotest_common.sh@10 -- # set +x 00:13:04.442 [2024-06-10 11:50:58.195094] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.442 11:50:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.442 11:50:58 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:13:04.442 11:50:58 -- common/autotest_common.sh@640 -- # local es=0 00:13:04.442 11:50:58 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:13:04.442 11:50:58 -- common/autotest_common.sh@628 -- # local arg=nvme 00:13:04.442 11:50:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:04.442 11:50:58 -- common/autotest_common.sh@632 -- # type -t nvme 00:13:04.442 11:50:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:04.442 11:50:58 -- common/autotest_common.sh@634 -- # type -P nvme 00:13:04.442 11:50:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:04.442 11:50:58 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:13:04.442 11:50:58 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:13:04.442 11:50:58 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:13:04.703 [2024-06-10 11:50:58.221799] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:13:04.703 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:04.703 could not add new controller: failed to write to nvme-fabrics device 00:13:04.703 11:50:58 -- common/autotest_common.sh@643 -- # es=1 00:13:04.703 11:50:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:04.703 11:50:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:04.703 11:50:58 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:13:04.703 11:50:58 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:04.703 11:50:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.703 11:50:58 -- common/autotest_common.sh@10 -- # set +x 00:13:04.703 11:50:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.703 11:50:58 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.088 11:50:59 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:06.088 11:50:59 -- common/autotest_common.sh@1177 -- # local i=0 00:13:06.088 11:50:59 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.088 11:50:59 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:06.088 11:50:59 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:08.001 11:51:01 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:08.001 11:51:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:08.001 11:51:01 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.001 11:51:01 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:08.001 11:51:01 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.001 11:51:01 -- common/autotest_common.sh@1187 -- # return 0 00:13:08.001 11:51:01 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:08.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.264 11:51:01 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:08.264 11:51:01 -- common/autotest_common.sh@1198 -- # local i=0 00:13:08.264 11:51:01 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:08.264 11:51:01 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.264 11:51:01 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:08.264 11:51:01 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.264 11:51:01 -- common/autotest_common.sh@1210 -- # return 0 00:13:08.264 11:51:01 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:08.264 11:51:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:08.264 11:51:01 -- common/autotest_common.sh@10 -- # set +x 00:13:08.264 11:51:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:08.264 11:51:01 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:08.264 11:51:01 -- common/autotest_common.sh@640 -- # local es=0 00:13:08.264 11:51:01 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:08.264 11:51:01 -- common/autotest_common.sh@628 -- # local arg=nvme 00:13:08.264 11:51:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:08.264 11:51:01 -- common/autotest_common.sh@632 -- # type -t nvme 00:13:08.264 11:51:01 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:08.264 11:51:01 -- common/autotest_common.sh@634 -- # type -P nvme 00:13:08.264 11:51:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:08.264 11:51:01 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:13:08.264 11:51:01 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:13:08.264 11:51:01 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:08.264 [2024-06-10 11:51:01.867697] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:13:08.264 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:08.264 could not add new controller: failed to write to nvme-fabrics device 00:13:08.264 11:51:01 -- common/autotest_common.sh@643 -- # es=1 00:13:08.264 11:51:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:08.264 11:51:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:08.264 11:51:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:08.264 11:51:01 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:08.264 11:51:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:08.264 11:51:01 -- common/autotest_common.sh@10 -- # set +x 00:13:08.264 11:51:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:08.264 11:51:01 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:09.667 11:51:03 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:09.667 11:51:03 -- common/autotest_common.sh@1177 -- # local i=0 00:13:09.667 11:51:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:09.667 11:51:03 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:09.667 11:51:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:11.584 11:51:05 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:11.584 11:51:05 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:11.584 11:51:05 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:11.584 11:51:05 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:11.584 11:51:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:11.584 11:51:05 -- common/autotest_common.sh@1187 -- # return 0 00:13:11.584 11:51:05 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:11.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.846 11:51:05 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:11.846 11:51:05 -- common/autotest_common.sh@1198 -- # local i=0 00:13:11.846 11:51:05 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:11.846 11:51:05 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.846 11:51:05 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:11.846 11:51:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.846 11:51:05 -- common/autotest_common.sh@1210 -- # return 0 00:13:11.846 11:51:05 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.846 11:51:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.846 11:51:05 -- common/autotest_common.sh@10 -- # set +x 00:13:11.846 11:51:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.846 11:51:05 -- target/rpc.sh@81 -- # seq 1 5 00:13:11.846 11:51:05 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:11.846 11:51:05 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:11.847 11:51:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.847 11:51:05 -- common/autotest_common.sh@10 -- # set +x 00:13:11.847 11:51:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.847 11:51:05 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.847 11:51:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.847 11:51:05 -- common/autotest_common.sh@10 -- # set +x 00:13:11.847 [2024-06-10 11:51:05.477905] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.847 11:51:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.847 11:51:05 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:11.847 11:51:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.847 11:51:05 -- common/autotest_common.sh@10 -- # set +x 00:13:11.847 11:51:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.847 11:51:05 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:11.847 11:51:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.847 11:51:05 -- common/autotest_common.sh@10 -- # set +x 00:13:11.847 11:51:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.847 11:51:05 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:13.239 11:51:06 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:13.239 11:51:06 -- common/autotest_common.sh@1177 -- # local i=0 00:13:13.239 11:51:06 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:13.239 11:51:06 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:13.239 11:51:06 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:15.792 11:51:08 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:15.792 11:51:08 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:15.792 11:51:08 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:15.792 11:51:08 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:15.792 11:51:08 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:15.792 11:51:08 -- common/autotest_common.sh@1187 -- # return 0 00:13:15.792 11:51:08 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:15.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.792 11:51:09 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:15.792 11:51:09 -- common/autotest_common.sh@1198 -- # local i=0 00:13:15.792 11:51:09 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:15.792 11:51:09 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
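The host-authorization sequence just above exercises the subsystem whitelist: with allow_any_host disabled, a connect from an unlisted host NQN is rejected with "does not allow host"; adding that host NQN lets the same connect through, and re-enabling allow_any_host opens the subsystem to everyone again. A condensed sketch of that flow, reusing the host NQN from this trace and assuming the stock scripts/rpc.py and nvme-cli in place of the test wrappers:

  NQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

  # Whitelist mode: no hosts added yet, so the target rejects this connect.
  rpc.py nvmf_subsystem_allow_any_host -d $NQN
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n $NQN --hostnqn=$HOSTNQN   # fails: does not allow host

  # Add the host NQN and the same connect succeeds.
  rpc.py nvmf_subsystem_add_host $NQN $HOSTNQN
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n $NQN --hostnqn=$HOSTNQN
  nvme disconnect -n $NQN

  # Removing the host closes the door again; allow_any_host reopens it for all initiators.
  rpc.py nvmf_subsystem_remove_host $NQN $HOSTNQN
  rpc.py nvmf_subsystem_allow_any_host -e $NQN
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n $NQN --hostnqn=$HOSTNQN
  nvme disconnect -n $NQN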
00:13:15.792 11:51:09 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:15.792 11:51:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.792 11:51:09 -- common/autotest_common.sh@1210 -- # return 0 00:13:15.792 11:51:09 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:15.792 11:51:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.792 11:51:09 -- common/autotest_common.sh@10 -- # set +x 00:13:15.792 11:51:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.792 11:51:09 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.792 11:51:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.792 11:51:09 -- common/autotest_common.sh@10 -- # set +x 00:13:15.792 11:51:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.792 11:51:09 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:15.792 11:51:09 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.792 11:51:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.792 11:51:09 -- common/autotest_common.sh@10 -- # set +x 00:13:15.792 11:51:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.792 11:51:09 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.792 11:51:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.792 11:51:09 -- common/autotest_common.sh@10 -- # set +x 00:13:15.792 [2024-06-10 11:51:09.141359] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.792 11:51:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.792 11:51:09 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:15.792 11:51:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.792 11:51:09 -- common/autotest_common.sh@10 -- # set +x 00:13:15.792 11:51:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.792 11:51:09 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:15.792 11:51:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.792 11:51:09 -- common/autotest_common.sh@10 -- # set +x 00:13:15.792 11:51:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.792 11:51:09 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:17.178 11:51:10 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:17.178 11:51:10 -- common/autotest_common.sh@1177 -- # local i=0 00:13:17.178 11:51:10 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:17.178 11:51:10 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:17.178 11:51:10 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:19.093 11:51:12 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:19.093 11:51:12 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:19.093 11:51:12 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:19.093 11:51:12 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:19.093 11:51:12 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.093 11:51:12 -- 
common/autotest_common.sh@1187 -- # return 0 00:13:19.093 11:51:12 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:19.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.093 11:51:12 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:19.093 11:51:12 -- common/autotest_common.sh@1198 -- # local i=0 00:13:19.093 11:51:12 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:19.093 11:51:12 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.093 11:51:12 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:19.093 11:51:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.093 11:51:12 -- common/autotest_common.sh@1210 -- # return 0 00:13:19.093 11:51:12 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:19.093 11:51:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.093 11:51:12 -- common/autotest_common.sh@10 -- # set +x 00:13:19.093 11:51:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.093 11:51:12 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:19.093 11:51:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.093 11:51:12 -- common/autotest_common.sh@10 -- # set +x 00:13:19.093 11:51:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.093 11:51:12 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:19.093 11:51:12 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:19.093 11:51:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.093 11:51:12 -- common/autotest_common.sh@10 -- # set +x 00:13:19.093 11:51:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.093 11:51:12 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:19.093 11:51:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.093 11:51:12 -- common/autotest_common.sh@10 -- # set +x 00:13:19.093 [2024-06-10 11:51:12.809478] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.093 11:51:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.093 11:51:12 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:19.093 11:51:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.093 11:51:12 -- common/autotest_common.sh@10 -- # set +x 00:13:19.093 11:51:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.093 11:51:12 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:19.093 11:51:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.093 11:51:12 -- common/autotest_common.sh@10 -- # set +x 00:13:19.093 11:51:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.093 11:51:12 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.031 11:51:14 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:21.031 11:51:14 -- common/autotest_common.sh@1177 -- # local i=0 00:13:21.031 11:51:14 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:21.031 11:51:14 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:13:21.031 11:51:14 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:22.951 11:51:16 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:22.951 11:51:16 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:22.951 11:51:16 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:22.951 11:51:16 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:22.951 11:51:16 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:22.951 11:51:16 -- common/autotest_common.sh@1187 -- # return 0 00:13:22.951 11:51:16 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:22.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.951 11:51:16 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:22.951 11:51:16 -- common/autotest_common.sh@1198 -- # local i=0 00:13:22.951 11:51:16 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:22.951 11:51:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.951 11:51:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:22.951 11:51:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.951 11:51:16 -- common/autotest_common.sh@1210 -- # return 0 00:13:22.951 11:51:16 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:22.951 11:51:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.951 11:51:16 -- common/autotest_common.sh@10 -- # set +x 00:13:22.951 11:51:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.951 11:51:16 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.951 11:51:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.951 11:51:16 -- common/autotest_common.sh@10 -- # set +x 00:13:22.951 11:51:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.951 11:51:16 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:22.951 11:51:16 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.951 11:51:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.951 11:51:16 -- common/autotest_common.sh@10 -- # set +x 00:13:22.951 11:51:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.951 11:51:16 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.951 11:51:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.951 11:51:16 -- common/autotest_common.sh@10 -- # set +x 00:13:22.951 [2024-06-10 11:51:16.508164] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.951 11:51:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.951 11:51:16 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:22.951 11:51:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.951 11:51:16 -- common/autotest_common.sh@10 -- # set +x 00:13:22.951 11:51:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.951 11:51:16 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.951 11:51:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.951 11:51:16 -- common/autotest_common.sh@10 -- # set +x 00:13:22.951 11:51:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.951 
11:51:16 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:24.336 11:51:17 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:24.336 11:51:17 -- common/autotest_common.sh@1177 -- # local i=0 00:13:24.336 11:51:17 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:24.336 11:51:17 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:24.336 11:51:17 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:26.251 11:51:19 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:26.251 11:51:19 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:26.251 11:51:19 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:26.251 11:51:19 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:26.251 11:51:19 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:26.251 11:51:19 -- common/autotest_common.sh@1187 -- # return 0 00:13:26.251 11:51:19 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:26.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.511 11:51:20 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:26.511 11:51:20 -- common/autotest_common.sh@1198 -- # local i=0 00:13:26.511 11:51:20 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:26.511 11:51:20 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:26.511 11:51:20 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:26.511 11:51:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:26.511 11:51:20 -- common/autotest_common.sh@1210 -- # return 0 00:13:26.511 11:51:20 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:26.511 11:51:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.511 11:51:20 -- common/autotest_common.sh@10 -- # set +x 00:13:26.511 11:51:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.511 11:51:20 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.511 11:51:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.511 11:51:20 -- common/autotest_common.sh@10 -- # set +x 00:13:26.511 11:51:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.511 11:51:20 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:26.511 11:51:20 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:26.511 11:51:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.511 11:51:20 -- common/autotest_common.sh@10 -- # set +x 00:13:26.511 11:51:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.511 11:51:20 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.511 11:51:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.511 11:51:20 -- common/autotest_common.sh@10 -- # set +x 00:13:26.511 [2024-06-10 11:51:20.175095] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.511 11:51:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.511 11:51:20 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:26.511 
11:51:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.511 11:51:20 -- common/autotest_common.sh@10 -- # set +x 00:13:26.511 11:51:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.511 11:51:20 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:26.511 11:51:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.511 11:51:20 -- common/autotest_common.sh@10 -- # set +x 00:13:26.511 11:51:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.511 11:51:20 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:28.424 11:51:21 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:28.424 11:51:21 -- common/autotest_common.sh@1177 -- # local i=0 00:13:28.424 11:51:21 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:28.424 11:51:21 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:28.424 11:51:21 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:30.336 11:51:23 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:30.336 11:51:23 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:30.336 11:51:23 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:30.336 11:51:23 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:30.336 11:51:23 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:30.336 11:51:23 -- common/autotest_common.sh@1187 -- # return 0 00:13:30.336 11:51:23 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:30.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.336 11:51:23 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:30.336 11:51:23 -- common/autotest_common.sh@1198 -- # local i=0 00:13:30.336 11:51:23 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:30.336 11:51:23 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:30.336 11:51:23 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:30.336 11:51:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:30.336 11:51:23 -- common/autotest_common.sh@1210 -- # return 0 00:13:30.336 11:51:23 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:30.336 11:51:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.336 11:51:23 -- common/autotest_common.sh@10 -- # set +x 00:13:30.336 11:51:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.336 11:51:23 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:30.336 11:51:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.336 11:51:23 -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 11:51:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.337 11:51:23 -- target/rpc.sh@99 -- # seq 1 5 00:13:30.337 11:51:23 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:30.337 11:51:23 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:30.337 11:51:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.337 11:51:23 -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 11:51:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.337 11:51:23 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.337 11:51:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.337 11:51:23 -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 [2024-06-10 11:51:23.874146] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.337 11:51:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.337 11:51:23 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:30.337 11:51:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.337 11:51:23 -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 11:51:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.337 11:51:23 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:30.337 11:51:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.337 11:51:23 -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 11:51:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.337 11:51:23 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.337 11:51:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.337 11:51:23 -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 11:51:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.337 11:51:23 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:30.337 11:51:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.337 11:51:23 -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 11:51:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.337 11:51:23 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:30.337 11:51:23 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:30.337 11:51:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.337 11:51:23 -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 11:51:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.337 11:51:23 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.337 11:51:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.337 11:51:23 -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 [2024-06-10 11:51:23.930278] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.337 11:51:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.337 11:51:23 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:30.337 11:51:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.337 11:51:23 -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 11:51:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.337 11:51:23 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:30.337 11:51:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.337 11:51:23 -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 11:51:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.337 11:51:23 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.337 11:51:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.337 11:51:23 -- 
common/autotest_common.sh@10 -- # set +x 00:13:30.337 11:51:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.337 11:51:23 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:30.337 11:51:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.337 11:51:23 -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 11:51:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.337 11:51:23 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:30.337 11:51:23 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:30.337 11:51:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.337 11:51:23 -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 11:51:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.337 11:51:23 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.337 11:51:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.337 11:51:23 -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 [2024-06-10 11:51:23.990455] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.337 11:51:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.337 11:51:23 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:30.337 11:51:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.337 11:51:23 -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 11:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.337 11:51:24 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:30.337 11:51:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.337 11:51:24 -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 11:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.337 11:51:24 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.337 11:51:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.337 11:51:24 -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 11:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.337 11:51:24 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:30.337 11:51:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.337 11:51:24 -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 11:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.337 11:51:24 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:30.337 11:51:24 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:30.337 11:51:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.337 11:51:24 -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 11:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.337 11:51:24 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.337 11:51:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.337 11:51:24 -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 [2024-06-10 11:51:24.050639] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.337 11:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.337 
11:51:24 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:30.337 11:51:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.337 11:51:24 -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 11:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.337 11:51:24 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:30.337 11:51:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.337 11:51:24 -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 11:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.337 11:51:24 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.337 11:51:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.337 11:51:24 -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 11:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.337 11:51:24 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:30.337 11:51:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.337 11:51:24 -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 11:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.337 11:51:24 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:30.337 11:51:24 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:30.337 11:51:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.337 11:51:24 -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 11:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.337 11:51:24 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.337 11:51:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.337 11:51:24 -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 [2024-06-10 11:51:24.106797] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.598 11:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.598 11:51:24 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:30.598 11:51:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.598 11:51:24 -- common/autotest_common.sh@10 -- # set +x 00:13:30.598 11:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.598 11:51:24 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:30.598 11:51:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.598 11:51:24 -- common/autotest_common.sh@10 -- # set +x 00:13:30.598 11:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.598 11:51:24 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.598 11:51:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.598 11:51:24 -- common/autotest_common.sh@10 -- # set +x 00:13:30.598 11:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.598 11:51:24 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:30.598 11:51:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.598 11:51:24 -- common/autotest_common.sh@10 -- # set +x 00:13:30.598 11:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.598 11:51:24 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
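Note: the second loop above (target/rpc.sh@99-@107) repeats the same subsystem setup and teardown five times without ever connecting a host, so only the target-side RPC path is exercised; the nvmf_get_stats call whose output follows then snapshots the target's per-poll-group qpair counters. A condensed sketch of that loop, reusing RPC_PY, NQN and SERIAL from the previous sketch:

for i in $(seq 1 5); do
    $RPC_PY nvmf_create_subsystem "$NQN" -s "$SERIAL"
    $RPC_PY nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    $RPC_PY nvmf_subsystem_add_ns "$NQN" Malloc1      # no -n: the target assigned nsid 1 in this run
    $RPC_PY nvmf_subsystem_allow_any_host "$NQN"
    $RPC_PY nvmf_subsystem_remove_ns "$NQN" 1
    $RPC_PY nvmf_delete_subsystem "$NQN"
done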
00:13:30.598 11:51:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.598 11:51:24 -- common/autotest_common.sh@10 -- # set +x 00:13:30.598 11:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.598 11:51:24 -- target/rpc.sh@110 -- # stats='{ 00:13:30.598 "tick_rate": 2400000000, 00:13:30.598 "poll_groups": [ 00:13:30.598 { 00:13:30.598 "name": "nvmf_tgt_poll_group_0", 00:13:30.598 "admin_qpairs": 0, 00:13:30.598 "io_qpairs": 224, 00:13:30.598 "current_admin_qpairs": 0, 00:13:30.598 "current_io_qpairs": 0, 00:13:30.598 "pending_bdev_io": 0, 00:13:30.598 "completed_nvme_io": 228, 00:13:30.598 "transports": [ 00:13:30.598 { 00:13:30.598 "trtype": "TCP" 00:13:30.598 } 00:13:30.598 ] 00:13:30.598 }, 00:13:30.598 { 00:13:30.598 "name": "nvmf_tgt_poll_group_1", 00:13:30.598 "admin_qpairs": 1, 00:13:30.598 "io_qpairs": 223, 00:13:30.598 "current_admin_qpairs": 0, 00:13:30.598 "current_io_qpairs": 0, 00:13:30.598 "pending_bdev_io": 0, 00:13:30.598 "completed_nvme_io": 228, 00:13:30.598 "transports": [ 00:13:30.598 { 00:13:30.598 "trtype": "TCP" 00:13:30.598 } 00:13:30.598 ] 00:13:30.598 }, 00:13:30.598 { 00:13:30.598 "name": "nvmf_tgt_poll_group_2", 00:13:30.598 "admin_qpairs": 6, 00:13:30.598 "io_qpairs": 218, 00:13:30.598 "current_admin_qpairs": 0, 00:13:30.598 "current_io_qpairs": 0, 00:13:30.598 "pending_bdev_io": 0, 00:13:30.598 "completed_nvme_io": 265, 00:13:30.598 "transports": [ 00:13:30.598 { 00:13:30.598 "trtype": "TCP" 00:13:30.598 } 00:13:30.598 ] 00:13:30.598 }, 00:13:30.598 { 00:13:30.598 "name": "nvmf_tgt_poll_group_3", 00:13:30.598 "admin_qpairs": 0, 00:13:30.598 "io_qpairs": 224, 00:13:30.598 "current_admin_qpairs": 0, 00:13:30.598 "current_io_qpairs": 0, 00:13:30.598 "pending_bdev_io": 0, 00:13:30.598 "completed_nvme_io": 518, 00:13:30.598 "transports": [ 00:13:30.598 { 00:13:30.598 "trtype": "TCP" 00:13:30.598 } 00:13:30.598 ] 00:13:30.598 } 00:13:30.598 ] 00:13:30.598 }' 00:13:30.598 11:51:24 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:30.598 11:51:24 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:30.598 11:51:24 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:30.598 11:51:24 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:30.598 11:51:24 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:30.598 11:51:24 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:30.598 11:51:24 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:30.598 11:51:24 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:30.598 11:51:24 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:30.598 11:51:24 -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:30.598 11:51:24 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:30.598 11:51:24 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:30.598 11:51:24 -- target/rpc.sh@123 -- # nvmftestfini 00:13:30.598 11:51:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:30.598 11:51:24 -- nvmf/common.sh@116 -- # sync 00:13:30.599 11:51:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:30.599 11:51:24 -- nvmf/common.sh@119 -- # set +e 00:13:30.599 11:51:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:30.599 11:51:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:30.599 rmmod nvme_tcp 00:13:30.599 rmmod nvme_fabrics 00:13:30.599 rmmod nvme_keyring 00:13:30.599 11:51:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:30.599 11:51:24 -- nvmf/common.sh@123 -- # set -e 00:13:30.599 11:51:24 -- 
nvmf/common.sh@124 -- # return 0 00:13:30.599 11:51:24 -- nvmf/common.sh@477 -- # '[' -n 1840922 ']' 00:13:30.599 11:51:24 -- nvmf/common.sh@478 -- # killprocess 1840922 00:13:30.599 11:51:24 -- common/autotest_common.sh@926 -- # '[' -z 1840922 ']' 00:13:30.599 11:51:24 -- common/autotest_common.sh@930 -- # kill -0 1840922 00:13:30.599 11:51:24 -- common/autotest_common.sh@931 -- # uname 00:13:30.599 11:51:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:30.599 11:51:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1840922 00:13:30.860 11:51:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:30.860 11:51:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:30.860 11:51:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1840922' 00:13:30.860 killing process with pid 1840922 00:13:30.860 11:51:24 -- common/autotest_common.sh@945 -- # kill 1840922 00:13:30.860 11:51:24 -- common/autotest_common.sh@950 -- # wait 1840922 00:13:30.860 11:51:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:30.860 11:51:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:30.860 11:51:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:30.860 11:51:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:30.860 11:51:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:30.860 11:51:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.860 11:51:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:30.860 11:51:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.408 11:51:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:33.408 00:13:33.408 real 0m36.976s 00:13:33.408 user 1m51.212s 00:13:33.408 sys 0m6.931s 00:13:33.408 11:51:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:33.408 11:51:26 -- common/autotest_common.sh@10 -- # set +x 00:13:33.408 ************************************ 00:13:33.408 END TEST nvmf_rpc 00:13:33.408 ************************************ 00:13:33.408 11:51:26 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:33.408 11:51:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:33.408 11:51:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:33.408 11:51:26 -- common/autotest_common.sh@10 -- # set +x 00:13:33.408 ************************************ 00:13:33.408 START TEST nvmf_invalid 00:13:33.408 ************************************ 00:13:33.408 11:51:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:33.408 * Looking for test storage... 
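For reference, the two assertions at the end of the rpc test above, (( 7 > 0 )) over admin_qpairs and (( 889 > 0 )) over io_qpairs, are produced by the script's jsum helper, which sums a jq filter over the nvmf_get_stats JSON with awk. A minimal sketch of that pattern (the exact body of jsum in rpc.sh may differ; the totals match the poll_groups block shown above: 0+1+6+0 admin qpairs and 224+223+218+224 I/O qpairs):

# Sum a numeric jq filter over the JSON returned by nvmf_get_stats.
jsum() {
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}

stats=$($RPC_PY nvmf_get_stats)
(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 7 in this run
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 889 in this run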
00:13:33.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:33.408 11:51:26 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:33.408 11:51:26 -- nvmf/common.sh@7 -- # uname -s 00:13:33.408 11:51:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:33.408 11:51:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:33.408 11:51:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:33.408 11:51:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:33.408 11:51:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:33.408 11:51:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:33.408 11:51:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:33.408 11:51:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:33.408 11:51:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:33.408 11:51:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:33.408 11:51:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:33.408 11:51:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:33.408 11:51:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:33.408 11:51:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:33.408 11:51:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:33.408 11:51:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:33.408 11:51:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.408 11:51:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.408 11:51:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.408 11:51:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.408 11:51:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.408 11:51:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.408 11:51:26 -- paths/export.sh@5 -- # export PATH 00:13:33.408 11:51:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.408 11:51:26 -- nvmf/common.sh@46 -- # : 0 00:13:33.408 11:51:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:33.408 11:51:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:33.408 11:51:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:33.408 11:51:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:33.408 11:51:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:33.408 11:51:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:33.408 11:51:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:33.408 11:51:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:33.409 11:51:26 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:33.409 11:51:26 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:33.409 11:51:26 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:33.409 11:51:26 -- target/invalid.sh@14 -- # target=foobar 00:13:33.409 11:51:26 -- target/invalid.sh@16 -- # RANDOM=0 00:13:33.409 11:51:26 -- target/invalid.sh@34 -- # nvmftestinit 00:13:33.409 11:51:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:33.409 11:51:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:33.409 11:51:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:33.409 11:51:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:33.409 11:51:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:33.409 11:51:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.409 11:51:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:33.409 11:51:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.409 11:51:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:33.409 11:51:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:33.409 11:51:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:33.409 11:51:26 -- common/autotest_common.sh@10 -- # set +x 00:13:39.994 11:51:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:39.994 11:51:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:39.994 11:51:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:39.994 11:51:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:39.994 11:51:33 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:39.994 11:51:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:39.994 11:51:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:39.994 11:51:33 -- nvmf/common.sh@294 -- # net_devs=() 00:13:39.994 11:51:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:39.994 11:51:33 -- nvmf/common.sh@295 -- # e810=() 00:13:39.994 11:51:33 -- nvmf/common.sh@295 -- # local -ga e810 00:13:39.994 11:51:33 -- nvmf/common.sh@296 -- # x722=() 00:13:39.994 11:51:33 -- nvmf/common.sh@296 -- # local -ga x722 00:13:39.994 11:51:33 -- nvmf/common.sh@297 -- # mlx=() 00:13:39.994 11:51:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:39.994 11:51:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:39.994 11:51:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:39.994 11:51:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:39.994 11:51:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:39.994 11:51:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:39.994 11:51:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:39.994 11:51:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:39.994 11:51:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:39.994 11:51:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:39.994 11:51:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:39.994 11:51:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:39.994 11:51:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:39.994 11:51:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:39.994 11:51:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:39.994 11:51:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:39.994 11:51:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:39.994 11:51:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:39.994 11:51:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:39.994 11:51:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:39.995 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:39.995 11:51:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:39.995 11:51:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:39.995 11:51:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.995 11:51:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.995 11:51:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:39.995 11:51:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:39.995 11:51:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:39.995 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:39.995 11:51:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:39.995 11:51:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:39.995 11:51:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.995 11:51:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.995 11:51:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:39.995 11:51:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:39.995 11:51:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:39.995 11:51:33 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:39.995 11:51:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:39.995 
11:51:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.995 11:51:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:39.995 11:51:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.995 11:51:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:39.995 Found net devices under 0000:31:00.0: cvl_0_0 00:13:39.995 11:51:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.995 11:51:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:39.995 11:51:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.995 11:51:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:39.995 11:51:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.995 11:51:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:39.995 Found net devices under 0000:31:00.1: cvl_0_1 00:13:39.995 11:51:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.995 11:51:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:39.995 11:51:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:39.995 11:51:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:39.995 11:51:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:39.995 11:51:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:39.995 11:51:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:39.995 11:51:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:39.995 11:51:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:39.995 11:51:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:39.995 11:51:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:39.995 11:51:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:39.995 11:51:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:39.995 11:51:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:39.995 11:51:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:39.995 11:51:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:39.995 11:51:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:39.995 11:51:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:39.995 11:51:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:40.255 11:51:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:40.255 11:51:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:40.255 11:51:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:40.255 11:51:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:40.255 11:51:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:40.255 11:51:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:40.515 11:51:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:40.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:40.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:13:40.515 00:13:40.515 --- 10.0.0.2 ping statistics --- 00:13:40.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.515 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:13:40.515 11:51:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:40.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:40.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:13:40.515 00:13:40.515 --- 10.0.0.1 ping statistics --- 00:13:40.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.515 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:13:40.516 11:51:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:40.516 11:51:34 -- nvmf/common.sh@410 -- # return 0 00:13:40.516 11:51:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:40.516 11:51:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:40.516 11:51:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:40.516 11:51:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:40.516 11:51:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:40.516 11:51:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:40.516 11:51:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:40.516 11:51:34 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:40.516 11:51:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:40.516 11:51:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:40.516 11:51:34 -- common/autotest_common.sh@10 -- # set +x 00:13:40.516 11:51:34 -- nvmf/common.sh@469 -- # nvmfpid=1851359 00:13:40.516 11:51:34 -- nvmf/common.sh@470 -- # waitforlisten 1851359 00:13:40.516 11:51:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:40.516 11:51:34 -- common/autotest_common.sh@819 -- # '[' -z 1851359 ']' 00:13:40.516 11:51:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.516 11:51:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:40.516 11:51:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.516 11:51:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:40.516 11:51:34 -- common/autotest_common.sh@10 -- # set +x 00:13:40.516 [2024-06-10 11:51:34.146988] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:40.516 [2024-06-10 11:51:34.147055] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.516 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.516 [2024-06-10 11:51:34.219922] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:40.776 [2024-06-10 11:51:34.292873] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:40.776 [2024-06-10 11:51:34.293013] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.776 [2024-06-10 11:51:34.293023] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.776 [2024-06-10 11:51:34.293032] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
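Note: the network setup traced just above (nvmf_tcp_init in nvmf/common.sh) moves one port of the detected E810 pair, cvl_0_0, into a private network namespace for the target and leaves its sibling cvl_0_1 in the root namespace for the initiator, so both ends of the TCP connection run on one machine over real hardware. Stripped of the helper plumbing, the sequence is roughly as follows (interface names and addresses as in this run; the nvmf_tgt path shown is relative to the SPDK tree):

TGT_IF=cvl_0_0   INIT_IF=cvl_0_1   NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INIT_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                      # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$INIT_IF"                 # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INIT_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                                     # root namespace -> target port
ip netns exec "$NS" ping -c 1 10.0.0.1                 # namespace -> initiator port

# nvmf_tgt is then started inside the namespace, as logged above; the test
# waits for its RPC socket before issuing the invalid-parameter RPCs below.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &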
00:13:40.776 [2024-06-10 11:51:34.293181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.776 [2024-06-10 11:51:34.293313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.776 [2024-06-10 11:51:34.293372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.776 [2024-06-10 11:51:34.293373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:41.346 11:51:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:41.346 11:51:34 -- common/autotest_common.sh@852 -- # return 0 00:13:41.346 11:51:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:41.346 11:51:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:41.346 11:51:34 -- common/autotest_common.sh@10 -- # set +x 00:13:41.346 11:51:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.346 11:51:34 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:41.346 11:51:34 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode21775 00:13:41.346 [2024-06-10 11:51:35.097765] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:41.606 11:51:35 -- target/invalid.sh@40 -- # out='request: 00:13:41.606 { 00:13:41.606 "nqn": "nqn.2016-06.io.spdk:cnode21775", 00:13:41.606 "tgt_name": "foobar", 00:13:41.606 "method": "nvmf_create_subsystem", 00:13:41.606 "req_id": 1 00:13:41.606 } 00:13:41.606 Got JSON-RPC error response 00:13:41.606 response: 00:13:41.607 { 00:13:41.607 "code": -32603, 00:13:41.607 "message": "Unable to find target foobar" 00:13:41.607 }' 00:13:41.607 11:51:35 -- target/invalid.sh@41 -- # [[ request: 00:13:41.607 { 00:13:41.607 "nqn": "nqn.2016-06.io.spdk:cnode21775", 00:13:41.607 "tgt_name": "foobar", 00:13:41.607 "method": "nvmf_create_subsystem", 00:13:41.607 "req_id": 1 00:13:41.607 } 00:13:41.607 Got JSON-RPC error response 00:13:41.607 response: 00:13:41.607 { 00:13:41.607 "code": -32603, 00:13:41.607 "message": "Unable to find target foobar" 00:13:41.607 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:41.607 11:51:35 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:41.607 11:51:35 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode30677 00:13:41.607 [2024-06-10 11:51:35.270364] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30677: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:41.607 11:51:35 -- target/invalid.sh@45 -- # out='request: 00:13:41.607 { 00:13:41.607 "nqn": "nqn.2016-06.io.spdk:cnode30677", 00:13:41.607 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:41.607 "method": "nvmf_create_subsystem", 00:13:41.607 "req_id": 1 00:13:41.607 } 00:13:41.607 Got JSON-RPC error response 00:13:41.607 response: 00:13:41.607 { 00:13:41.607 "code": -32602, 00:13:41.607 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:41.607 }' 00:13:41.607 11:51:35 -- target/invalid.sh@46 -- # [[ request: 00:13:41.607 { 00:13:41.607 "nqn": "nqn.2016-06.io.spdk:cnode30677", 00:13:41.607 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:41.607 "method": "nvmf_create_subsystem", 00:13:41.607 "req_id": 1 00:13:41.607 } 00:13:41.607 Got JSON-RPC error response 00:13:41.607 response: 00:13:41.607 { 
00:13:41.607 "code": -32602, 00:13:41.607 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:41.607 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:41.607 11:51:35 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:41.607 11:51:35 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode13576 00:13:41.868 [2024-06-10 11:51:35.442959] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13576: invalid model number 'SPDK_Controller' 00:13:41.868 11:51:35 -- target/invalid.sh@50 -- # out='request: 00:13:41.868 { 00:13:41.868 "nqn": "nqn.2016-06.io.spdk:cnode13576", 00:13:41.868 "model_number": "SPDK_Controller\u001f", 00:13:41.868 "method": "nvmf_create_subsystem", 00:13:41.868 "req_id": 1 00:13:41.868 } 00:13:41.868 Got JSON-RPC error response 00:13:41.868 response: 00:13:41.868 { 00:13:41.868 "code": -32602, 00:13:41.868 "message": "Invalid MN SPDK_Controller\u001f" 00:13:41.868 }' 00:13:41.868 11:51:35 -- target/invalid.sh@51 -- # [[ request: 00:13:41.868 { 00:13:41.868 "nqn": "nqn.2016-06.io.spdk:cnode13576", 00:13:41.868 "model_number": "SPDK_Controller\u001f", 00:13:41.868 "method": "nvmf_create_subsystem", 00:13:41.868 "req_id": 1 00:13:41.868 } 00:13:41.868 Got JSON-RPC error response 00:13:41.868 response: 00:13:41.868 { 00:13:41.868 "code": -32602, 00:13:41.868 "message": "Invalid MN SPDK_Controller\u001f" 00:13:41.868 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:41.868 11:51:35 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:41.868 11:51:35 -- target/invalid.sh@19 -- # local length=21 ll 00:13:41.868 11:51:35 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:41.868 11:51:35 -- target/invalid.sh@21 -- # local chars 00:13:41.868 11:51:35 -- target/invalid.sh@22 -- # local string 00:13:41.868 11:51:35 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:41.868 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.868 11:51:35 -- target/invalid.sh@25 -- # printf %x 79 00:13:41.868 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:41.868 11:51:35 -- target/invalid.sh@25 -- # string+=O 00:13:41.868 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.868 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.868 11:51:35 -- target/invalid.sh@25 -- # printf %x 67 00:13:41.868 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:41.868 11:51:35 -- target/invalid.sh@25 -- # string+=C 00:13:41.868 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.868 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.868 11:51:35 -- target/invalid.sh@25 -- # printf %x 125 00:13:41.868 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:41.868 11:51:35 -- target/invalid.sh@25 -- # string+='}' 00:13:41.868 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.868 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.868 11:51:35 -- target/invalid.sh@25 -- # printf %x 118 00:13:41.868 11:51:35 -- 
target/invalid.sh@25 -- # echo -e '\x76' 00:13:41.868 11:51:35 -- target/invalid.sh@25 -- # string+=v 00:13:41.868 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.868 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.868 11:51:35 -- target/invalid.sh@25 -- # printf %x 82 00:13:41.868 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:41.868 11:51:35 -- target/invalid.sh@25 -- # string+=R 00:13:41.868 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.868 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.868 11:51:35 -- target/invalid.sh@25 -- # printf %x 44 00:13:41.868 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:41.868 11:51:35 -- target/invalid.sh@25 -- # string+=, 00:13:41.868 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.868 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.868 11:51:35 -- target/invalid.sh@25 -- # printf %x 112 00:13:41.868 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:41.868 11:51:35 -- target/invalid.sh@25 -- # string+=p 00:13:41.868 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.868 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.868 11:51:35 -- target/invalid.sh@25 -- # printf %x 58 00:13:41.868 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:41.868 11:51:35 -- target/invalid.sh@25 -- # string+=: 00:13:41.868 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.868 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.868 11:51:35 -- target/invalid.sh@25 -- # printf %x 62 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # string+='>' 00:13:41.869 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.869 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # printf %x 110 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # string+=n 00:13:41.869 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.869 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # printf %x 100 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # string+=d 00:13:41.869 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.869 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # printf %x 97 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # string+=a 00:13:41.869 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.869 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # printf %x 115 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # string+=s 00:13:41.869 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.869 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # printf %x 52 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # string+=4 00:13:41.869 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.869 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # printf %x 103 00:13:41.869 11:51:35 -- 
target/invalid.sh@25 -- # echo -e '\x67' 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # string+=g 00:13:41.869 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.869 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # printf %x 52 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # string+=4 00:13:41.869 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.869 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # printf %x 61 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # string+== 00:13:41.869 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.869 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # printf %x 70 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # string+=F 00:13:41.869 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.869 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # printf %x 37 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # string+=% 00:13:41.869 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.869 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # printf %x 119 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # string+=w 00:13:41.869 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.869 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # printf %x 61 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:41.869 11:51:35 -- target/invalid.sh@25 -- # string+== 00:13:41.869 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.869 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.869 11:51:35 -- target/invalid.sh@28 -- # [[ O == \- ]] 00:13:41.869 11:51:35 -- target/invalid.sh@31 -- # echo 'OC}vR,p:>ndas4g4=F%w=' 00:13:41.869 11:51:35 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'OC}vR,p:>ndas4g4=F%w=' nqn.2016-06.io.spdk:cnode2785 00:13:42.130 [2024-06-10 11:51:35.771978] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2785: invalid serial number 'OC}vR,p:>ndas4g4=F%w=' 00:13:42.130 11:51:35 -- target/invalid.sh@54 -- # out='request: 00:13:42.130 { 00:13:42.130 "nqn": "nqn.2016-06.io.spdk:cnode2785", 00:13:42.130 "serial_number": "OC}vR,p:>ndas4g4=F%w=", 00:13:42.130 "method": "nvmf_create_subsystem", 00:13:42.130 "req_id": 1 00:13:42.130 } 00:13:42.130 Got JSON-RPC error response 00:13:42.130 response: 00:13:42.130 { 00:13:42.130 "code": -32602, 00:13:42.130 "message": "Invalid SN OC}vR,p:>ndas4g4=F%w=" 00:13:42.130 }' 00:13:42.130 11:51:35 -- target/invalid.sh@55 -- # [[ request: 00:13:42.130 { 00:13:42.130 "nqn": "nqn.2016-06.io.spdk:cnode2785", 00:13:42.130 "serial_number": "OC}vR,p:>ndas4g4=F%w=", 00:13:42.130 "method": "nvmf_create_subsystem", 00:13:42.130 "req_id": 1 00:13:42.130 } 00:13:42.130 Got JSON-RPC error response 00:13:42.130 response: 00:13:42.130 { 00:13:42.130 "code": -32602, 00:13:42.130 "message": 
"Invalid SN OC}vR,p:>ndas4g4=F%w=" 00:13:42.130 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:42.130 11:51:35 -- target/invalid.sh@58 -- # gen_random_s 41 00:13:42.130 11:51:35 -- target/invalid.sh@19 -- # local length=41 ll 00:13:42.130 11:51:35 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:42.130 11:51:35 -- target/invalid.sh@21 -- # local chars 00:13:42.130 11:51:35 -- target/invalid.sh@22 -- # local string 00:13:42.130 11:51:35 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:42.130 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.130 11:51:35 -- target/invalid.sh@25 -- # printf %x 116 00:13:42.130 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:42.130 11:51:35 -- target/invalid.sh@25 -- # string+=t 00:13:42.130 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.130 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.130 11:51:35 -- target/invalid.sh@25 -- # printf %x 101 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # string+=e 00:13:42.131 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.131 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # printf %x 46 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # string+=. 
00:13:42.131 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.131 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # printf %x 100 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # string+=d 00:13:42.131 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.131 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # printf %x 65 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # string+=A 00:13:42.131 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.131 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # printf %x 82 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # string+=R 00:13:42.131 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.131 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # printf %x 113 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # string+=q 00:13:42.131 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.131 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # printf %x 47 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # string+=/ 00:13:42.131 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.131 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # printf %x 97 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # string+=a 00:13:42.131 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.131 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # printf %x 47 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # string+=/ 00:13:42.131 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.131 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # printf %x 87 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # string+=W 00:13:42.131 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.131 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # printf %x 102 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # string+=f 00:13:42.131 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.131 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # printf %x 33 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:42.131 11:51:35 -- target/invalid.sh@25 -- # string+='!' 
00:13:42.131 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.131 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.391 11:51:35 -- target/invalid.sh@25 -- # printf %x 56 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # string+=8 00:13:42.392 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.392 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # printf %x 108 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # string+=l 00:13:42.392 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.392 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # printf %x 88 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # string+=X 00:13:42.392 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.392 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # printf %x 127 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # string+=$'\177' 00:13:42.392 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.392 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # printf %x 45 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # string+=- 00:13:42.392 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.392 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # printf %x 93 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # string+=']' 00:13:42.392 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.392 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # printf %x 35 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # string+='#' 00:13:42.392 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.392 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # printf %x 44 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # string+=, 00:13:42.392 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.392 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # printf %x 95 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # string+=_ 00:13:42.392 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.392 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # printf %x 64 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # string+=@ 00:13:42.392 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.392 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # printf %x 99 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # string+=c 
00:13:42.392 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.392 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # printf %x 36 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # string+='$' 00:13:42.392 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.392 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # printf %x 45 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # string+=- 00:13:42.392 11:51:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.392 11:51:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # printf %x 68 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:42.392 11:51:35 -- target/invalid.sh@25 -- # string+=D 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # printf %x 105 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # string+=i 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # printf %x 99 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # string+=c 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # printf %x 122 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # string+=z 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # printf %x 119 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # string+=w 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # printf %x 124 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # string+='|' 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # printf %x 121 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # string+=y 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # printf %x 62 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # string+='>' 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # printf %x 85 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # string+=U 
00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # printf %x 127 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # string+=$'\177' 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # printf %x 84 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # string+=T 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # printf %x 99 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # string+=c 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # printf %x 94 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # string+='^' 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # printf %x 111 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # string+=o 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # printf %x 122 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:42.392 11:51:36 -- target/invalid.sh@25 -- # string+=z 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.392 11:51:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.392 11:51:36 -- target/invalid.sh@28 -- # [[ t == \- ]] 00:13:42.392 11:51:36 -- target/invalid.sh@31 -- # echo 'te.dARq/a/Wf!8lX-]#,_@c$-Diczw|y>UTc^oz' 00:13:42.392 11:51:36 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'te.dARq/a/Wf!8lX-]#,_@c$-Diczw|y>UTc^oz' nqn.2016-06.io.spdk:cnode31156 00:13:42.653 [2024-06-10 11:51:36.241523] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31156: invalid model number 'te.dARq/a/Wf!8lX-]#,_@c$-Diczw|y>UTc^oz' 00:13:42.653 11:51:36 -- target/invalid.sh@58 -- # out='request: 00:13:42.653 { 00:13:42.653 "nqn": "nqn.2016-06.io.spdk:cnode31156", 00:13:42.653 "model_number": "te.dARq/a/Wf!8lX\u007f-]#,_@c$-Diczw|y>U\u007fTc^oz", 00:13:42.653 "method": "nvmf_create_subsystem", 00:13:42.653 "req_id": 1 00:13:42.653 } 00:13:42.653 Got JSON-RPC error response 00:13:42.653 response: 00:13:42.653 { 00:13:42.653 "code": -32602, 00:13:42.653 "message": "Invalid MN te.dARq/a/Wf!8lX\u007f-]#,_@c$-Diczw|y>U\u007fTc^oz" 00:13:42.653 }' 00:13:42.653 11:51:36 -- target/invalid.sh@59 -- # [[ request: 00:13:42.653 { 00:13:42.653 "nqn": "nqn.2016-06.io.spdk:cnode31156", 00:13:42.653 "model_number": "te.dARq/a/Wf!8lX\u007f-]#,_@c$-Diczw|y>U\u007fTc^oz", 00:13:42.653 "method": "nvmf_create_subsystem", 00:13:42.653 "req_id": 1 00:13:42.653 } 00:13:42.653 Got JSON-RPC error response 00:13:42.653 response: 00:13:42.653 { 
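Each of these negative cases follows the same shape: call rpc.py nvmf_create_subsystem with a deliberately bad value, keep the JSON-RPC error text, and glob-match the expected message. A minimal sketch of that pattern, using the serial-number case from the trace (the 2>&1 capture and the || true are assumptions about how the script tolerates the non-zero exit status):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # a serial number ending in the control character 0x1f must be rejected (code -32602)
    out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' \
              nqn.2016-06.io.spdk:cnode30677 2>&1) || true
    [[ $out == *"Invalid SN"* ]]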
00:13:42.653 "code": -32602, 00:13:42.653 "message": "Invalid MN te.dARq/a/Wf!8lX\u007f-]#,_@c$-Diczw|y>U\u007fTc^oz" 00:13:42.653 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:42.653 11:51:36 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:42.653 [2024-06-10 11:51:36.406128] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:42.913 11:51:36 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:42.913 11:51:36 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:42.913 11:51:36 -- target/invalid.sh@67 -- # echo '' 00:13:42.913 11:51:36 -- target/invalid.sh@67 -- # head -n 1 00:13:42.913 11:51:36 -- target/invalid.sh@67 -- # IP= 00:13:42.913 11:51:36 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:43.174 [2024-06-10 11:51:36.744785] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:43.174 11:51:36 -- target/invalid.sh@69 -- # out='request: 00:13:43.174 { 00:13:43.174 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:43.174 "listen_address": { 00:13:43.174 "trtype": "tcp", 00:13:43.174 "traddr": "", 00:13:43.174 "trsvcid": "4421" 00:13:43.174 }, 00:13:43.174 "method": "nvmf_subsystem_remove_listener", 00:13:43.174 "req_id": 1 00:13:43.174 } 00:13:43.174 Got JSON-RPC error response 00:13:43.174 response: 00:13:43.174 { 00:13:43.174 "code": -32602, 00:13:43.174 "message": "Invalid parameters" 00:13:43.174 }' 00:13:43.174 11:51:36 -- target/invalid.sh@70 -- # [[ request: 00:13:43.174 { 00:13:43.174 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:43.174 "listen_address": { 00:13:43.174 "trtype": "tcp", 00:13:43.174 "traddr": "", 00:13:43.174 "trsvcid": "4421" 00:13:43.174 }, 00:13:43.174 "method": "nvmf_subsystem_remove_listener", 00:13:43.174 "req_id": 1 00:13:43.174 } 00:13:43.175 Got JSON-RPC error response 00:13:43.175 response: 00:13:43.175 { 00:13:43.175 "code": -32602, 00:13:43.175 "message": "Invalid parameters" 00:13:43.175 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:43.175 11:51:36 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9991 -i 0 00:13:43.175 [2024-06-10 11:51:36.897238] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9991: invalid cntlid range [0-65519] 00:13:43.175 11:51:36 -- target/invalid.sh@73 -- # out='request: 00:13:43.175 { 00:13:43.175 "nqn": "nqn.2016-06.io.spdk:cnode9991", 00:13:43.175 "min_cntlid": 0, 00:13:43.175 "method": "nvmf_create_subsystem", 00:13:43.175 "req_id": 1 00:13:43.175 } 00:13:43.175 Got JSON-RPC error response 00:13:43.175 response: 00:13:43.175 { 00:13:43.175 "code": -32602, 00:13:43.175 "message": "Invalid cntlid range [0-65519]" 00:13:43.175 }' 00:13:43.175 11:51:36 -- target/invalid.sh@74 -- # [[ request: 00:13:43.175 { 00:13:43.175 "nqn": "nqn.2016-06.io.spdk:cnode9991", 00:13:43.175 "min_cntlid": 0, 00:13:43.175 "method": "nvmf_create_subsystem", 00:13:43.175 "req_id": 1 00:13:43.175 } 00:13:43.175 Got JSON-RPC error response 00:13:43.175 response: 00:13:43.175 { 00:13:43.175 "code": -32602, 00:13:43.175 "message": "Invalid cntlid range [0-65519]" 00:13:43.175 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:43.175 11:51:36 -- 
target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25248 -i 65520 00:13:43.435 [2024-06-10 11:51:37.065797] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25248: invalid cntlid range [65520-65519] 00:13:43.435 11:51:37 -- target/invalid.sh@75 -- # out='request: 00:13:43.435 { 00:13:43.435 "nqn": "nqn.2016-06.io.spdk:cnode25248", 00:13:43.435 "min_cntlid": 65520, 00:13:43.435 "method": "nvmf_create_subsystem", 00:13:43.435 "req_id": 1 00:13:43.435 } 00:13:43.435 Got JSON-RPC error response 00:13:43.435 response: 00:13:43.435 { 00:13:43.435 "code": -32602, 00:13:43.435 "message": "Invalid cntlid range [65520-65519]" 00:13:43.435 }' 00:13:43.435 11:51:37 -- target/invalid.sh@76 -- # [[ request: 00:13:43.435 { 00:13:43.435 "nqn": "nqn.2016-06.io.spdk:cnode25248", 00:13:43.435 "min_cntlid": 65520, 00:13:43.435 "method": "nvmf_create_subsystem", 00:13:43.435 "req_id": 1 00:13:43.435 } 00:13:43.435 Got JSON-RPC error response 00:13:43.435 response: 00:13:43.435 { 00:13:43.435 "code": -32602, 00:13:43.436 "message": "Invalid cntlid range [65520-65519]" 00:13:43.436 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:43.436 11:51:37 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25334 -I 0 00:13:43.696 [2024-06-10 11:51:37.218297] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25334: invalid cntlid range [1-0] 00:13:43.696 11:51:37 -- target/invalid.sh@77 -- # out='request: 00:13:43.696 { 00:13:43.696 "nqn": "nqn.2016-06.io.spdk:cnode25334", 00:13:43.696 "max_cntlid": 0, 00:13:43.696 "method": "nvmf_create_subsystem", 00:13:43.696 "req_id": 1 00:13:43.696 } 00:13:43.696 Got JSON-RPC error response 00:13:43.696 response: 00:13:43.696 { 00:13:43.696 "code": -32602, 00:13:43.696 "message": "Invalid cntlid range [1-0]" 00:13:43.696 }' 00:13:43.696 11:51:37 -- target/invalid.sh@78 -- # [[ request: 00:13:43.696 { 00:13:43.696 "nqn": "nqn.2016-06.io.spdk:cnode25334", 00:13:43.696 "max_cntlid": 0, 00:13:43.696 "method": "nvmf_create_subsystem", 00:13:43.696 "req_id": 1 00:13:43.696 } 00:13:43.696 Got JSON-RPC error response 00:13:43.696 response: 00:13:43.696 { 00:13:43.696 "code": -32602, 00:13:43.696 "message": "Invalid cntlid range [1-0]" 00:13:43.696 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:43.696 11:51:37 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29277 -I 65520 00:13:43.696 [2024-06-10 11:51:37.382832] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29277: invalid cntlid range [1-65520] 00:13:43.696 11:51:37 -- target/invalid.sh@79 -- # out='request: 00:13:43.696 { 00:13:43.696 "nqn": "nqn.2016-06.io.spdk:cnode29277", 00:13:43.696 "max_cntlid": 65520, 00:13:43.696 "method": "nvmf_create_subsystem", 00:13:43.696 "req_id": 1 00:13:43.696 } 00:13:43.696 Got JSON-RPC error response 00:13:43.696 response: 00:13:43.696 { 00:13:43.696 "code": -32602, 00:13:43.696 "message": "Invalid cntlid range [1-65520]" 00:13:43.696 }' 00:13:43.696 11:51:37 -- target/invalid.sh@80 -- # [[ request: 00:13:43.696 { 00:13:43.696 "nqn": "nqn.2016-06.io.spdk:cnode29277", 00:13:43.696 "max_cntlid": 65520, 00:13:43.696 "method": "nvmf_create_subsystem", 00:13:43.696 "req_id": 1 00:13:43.696 } 00:13:43.696 Got JSON-RPC 
error response 00:13:43.696 response: 00:13:43.696 { 00:13:43.696 "code": -32602, 00:13:43.696 "message": "Invalid cntlid range [1-65520]" 00:13:43.696 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:43.696 11:51:37 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10068 -i 6 -I 5 00:13:43.957 [2024-06-10 11:51:37.547387] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10068: invalid cntlid range [6-5] 00:13:43.957 11:51:37 -- target/invalid.sh@83 -- # out='request: 00:13:43.957 { 00:13:43.957 "nqn": "nqn.2016-06.io.spdk:cnode10068", 00:13:43.957 "min_cntlid": 6, 00:13:43.957 "max_cntlid": 5, 00:13:43.957 "method": "nvmf_create_subsystem", 00:13:43.957 "req_id": 1 00:13:43.957 } 00:13:43.957 Got JSON-RPC error response 00:13:43.957 response: 00:13:43.957 { 00:13:43.957 "code": -32602, 00:13:43.957 "message": "Invalid cntlid range [6-5]" 00:13:43.957 }' 00:13:43.957 11:51:37 -- target/invalid.sh@84 -- # [[ request: 00:13:43.957 { 00:13:43.957 "nqn": "nqn.2016-06.io.spdk:cnode10068", 00:13:43.957 "min_cntlid": 6, 00:13:43.957 "max_cntlid": 5, 00:13:43.957 "method": "nvmf_create_subsystem", 00:13:43.957 "req_id": 1 00:13:43.957 } 00:13:43.957 Got JSON-RPC error response 00:13:43.957 response: 00:13:43.957 { 00:13:43.957 "code": -32602, 00:13:43.957 "message": "Invalid cntlid range [6-5]" 00:13:43.957 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:43.957 11:51:37 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:43.957 11:51:37 -- target/invalid.sh@87 -- # out='request: 00:13:43.957 { 00:13:43.957 "name": "foobar", 00:13:43.957 "method": "nvmf_delete_target", 00:13:43.957 "req_id": 1 00:13:43.957 } 00:13:43.957 Got JSON-RPC error response 00:13:43.957 response: 00:13:43.957 { 00:13:43.957 "code": -32602, 00:13:43.957 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:43.957 }' 00:13:43.957 11:51:37 -- target/invalid.sh@88 -- # [[ request: 00:13:43.957 { 00:13:43.957 "name": "foobar", 00:13:43.957 "method": "nvmf_delete_target", 00:13:43.957 "req_id": 1 00:13:43.957 } 00:13:43.957 Got JSON-RPC error response 00:13:43.957 response: 00:13:43.957 { 00:13:43.957 "code": -32602, 00:13:43.957 "message": "The specified target doesn't exist, cannot delete it." 
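Taken together, the cntlid cases above pin down the accepted range: min_cntlid and max_cntlid appear to be limited to 1..65519 with min not exceeding max, and 0, 65520 and an inverted pair such as 6/5 are all rejected with "Invalid cntlid range". The same checks can be reproduced by hand with the commands already shown in the trace, e.g.:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9991  -i 0          # Invalid cntlid range [0-65519]
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29277 -I 65520      # Invalid cntlid range [1-65520]
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10068 -i 6 -I 5     # Invalid cntlid range [6-5]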
00:13:43.957 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:43.957 11:51:37 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:43.957 11:51:37 -- target/invalid.sh@91 -- # nvmftestfini 00:13:43.957 11:51:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:43.957 11:51:37 -- nvmf/common.sh@116 -- # sync 00:13:43.957 11:51:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:43.957 11:51:37 -- nvmf/common.sh@119 -- # set +e 00:13:43.957 11:51:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:43.957 11:51:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:43.957 rmmod nvme_tcp 00:13:43.957 rmmod nvme_fabrics 00:13:43.957 rmmod nvme_keyring 00:13:44.218 11:51:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:44.218 11:51:37 -- nvmf/common.sh@123 -- # set -e 00:13:44.218 11:51:37 -- nvmf/common.sh@124 -- # return 0 00:13:44.218 11:51:37 -- nvmf/common.sh@477 -- # '[' -n 1851359 ']' 00:13:44.218 11:51:37 -- nvmf/common.sh@478 -- # killprocess 1851359 00:13:44.218 11:51:37 -- common/autotest_common.sh@926 -- # '[' -z 1851359 ']' 00:13:44.218 11:51:37 -- common/autotest_common.sh@930 -- # kill -0 1851359 00:13:44.218 11:51:37 -- common/autotest_common.sh@931 -- # uname 00:13:44.218 11:51:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:44.218 11:51:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1851359 00:13:44.218 11:51:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:44.218 11:51:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:44.218 11:51:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1851359' 00:13:44.218 killing process with pid 1851359 00:13:44.218 11:51:37 -- common/autotest_common.sh@945 -- # kill 1851359 00:13:44.218 11:51:37 -- common/autotest_common.sh@950 -- # wait 1851359 00:13:44.218 11:51:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:44.218 11:51:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:44.218 11:51:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:44.218 11:51:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:44.218 11:51:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:44.218 11:51:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.218 11:51:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.218 11:51:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.764 11:51:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:46.764 00:13:46.764 real 0m13.349s 00:13:46.764 user 0m18.611s 00:13:46.764 sys 0m6.353s 00:13:46.764 11:51:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:46.764 11:51:39 -- common/autotest_common.sh@10 -- # set +x 00:13:46.764 ************************************ 00:13:46.764 END TEST nvmf_invalid 00:13:46.764 ************************************ 00:13:46.764 11:51:40 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:46.764 11:51:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:46.764 11:51:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:46.764 11:51:40 -- common/autotest_common.sh@10 -- # set +x 00:13:46.764 ************************************ 00:13:46.764 START TEST nvmf_abort 00:13:46.764 ************************************ 00:13:46.764 11:51:40 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:46.764 * Looking for test storage... 00:13:46.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:46.764 11:51:40 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:46.764 11:51:40 -- nvmf/common.sh@7 -- # uname -s 00:13:46.764 11:51:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:46.764 11:51:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:46.764 11:51:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:46.764 11:51:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:46.764 11:51:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:46.764 11:51:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:46.764 11:51:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:46.764 11:51:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:46.764 11:51:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:46.764 11:51:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:46.764 11:51:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:46.764 11:51:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:46.764 11:51:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:46.764 11:51:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:46.764 11:51:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:46.764 11:51:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:46.764 11:51:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:46.764 11:51:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:46.764 11:51:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:46.764 11:51:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.764 11:51:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.764 11:51:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.764 11:51:40 -- paths/export.sh@5 -- # export PATH 00:13:46.764 11:51:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.764 11:51:40 -- nvmf/common.sh@46 -- # : 0 00:13:46.764 11:51:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:46.764 11:51:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:46.764 11:51:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:46.764 11:51:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:46.764 11:51:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:46.764 11:51:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:46.764 11:51:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:46.764 11:51:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:46.764 11:51:40 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:46.764 11:51:40 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:46.764 11:51:40 -- target/abort.sh@14 -- # nvmftestinit 00:13:46.764 11:51:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:46.764 11:51:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:46.764 11:51:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:46.764 11:51:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:46.764 11:51:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:46.764 11:51:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.764 11:51:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:46.764 11:51:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.764 11:51:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:46.764 11:51:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:46.764 11:51:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:46.764 11:51:40 -- common/autotest_common.sh@10 -- # set +x 00:13:53.356 11:51:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:53.356 11:51:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:53.356 11:51:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:53.356 11:51:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:53.356 11:51:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:53.356 11:51:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:53.356 11:51:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:53.356 11:51:47 -- nvmf/common.sh@294 -- # net_devs=() 00:13:53.356 11:51:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:53.356 11:51:47 -- nvmf/common.sh@295 -- 
# e810=() 00:13:53.356 11:51:47 -- nvmf/common.sh@295 -- # local -ga e810 00:13:53.356 11:51:47 -- nvmf/common.sh@296 -- # x722=() 00:13:53.356 11:51:47 -- nvmf/common.sh@296 -- # local -ga x722 00:13:53.356 11:51:47 -- nvmf/common.sh@297 -- # mlx=() 00:13:53.356 11:51:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:53.356 11:51:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:53.356 11:51:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:53.356 11:51:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:53.356 11:51:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:53.356 11:51:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:53.356 11:51:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:53.356 11:51:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:53.356 11:51:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:53.356 11:51:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:53.356 11:51:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:53.356 11:51:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:53.356 11:51:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:53.356 11:51:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:53.356 11:51:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:53.356 11:51:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:53.356 11:51:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:53.356 11:51:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:53.356 11:51:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:53.356 11:51:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:53.356 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:53.356 11:51:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:53.356 11:51:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:53.356 11:51:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.356 11:51:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.356 11:51:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:53.356 11:51:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:53.356 11:51:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:53.356 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:53.356 11:51:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:53.356 11:51:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:53.356 11:51:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.356 11:51:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.356 11:51:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:53.356 11:51:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:53.356 11:51:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:53.356 11:51:47 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:53.356 11:51:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:53.356 11:51:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.356 11:51:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:53.356 11:51:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.356 11:51:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:53.356 Found 
net devices under 0000:31:00.0: cvl_0_0 00:13:53.356 11:51:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.356 11:51:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:53.356 11:51:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.356 11:51:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:53.356 11:51:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.356 11:51:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:53.356 Found net devices under 0000:31:00.1: cvl_0_1 00:13:53.356 11:51:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.356 11:51:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:53.356 11:51:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:53.356 11:51:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:53.356 11:51:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:53.356 11:51:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:53.356 11:51:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:53.356 11:51:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:53.356 11:51:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:53.356 11:51:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:53.356 11:51:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:53.356 11:51:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:53.356 11:51:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:53.356 11:51:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:53.356 11:51:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:53.356 11:51:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:53.356 11:51:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:53.356 11:51:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:53.356 11:51:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:53.618 11:51:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:53.618 11:51:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:53.618 11:51:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:53.618 11:51:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:53.618 11:51:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:53.618 11:51:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:53.618 11:51:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:53.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:53.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:13:53.618 00:13:53.618 --- 10.0.0.2 ping statistics --- 00:13:53.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.618 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:13:53.618 11:51:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:53.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
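The setup above gives the two E810 port net devices (found under 0000:31:00.0/.1) a back-to-back test topology: cvl_0_0 is moved into a private namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1. Gathered from the traced commands, the sequence is (sketch only, flushes and error handling omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                            # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator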
00:13:53.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.353 ms 00:13:53.618 00:13:53.618 --- 10.0.0.1 ping statistics --- 00:13:53.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.618 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:13:53.618 11:51:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:53.618 11:51:47 -- nvmf/common.sh@410 -- # return 0 00:13:53.618 11:51:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:53.618 11:51:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:53.618 11:51:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:53.618 11:51:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:53.618 11:51:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:53.618 11:51:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:53.618 11:51:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:53.879 11:51:47 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:53.879 11:51:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:53.879 11:51:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:53.879 11:51:47 -- common/autotest_common.sh@10 -- # set +x 00:13:53.879 11:51:47 -- nvmf/common.sh@469 -- # nvmfpid=1856606 00:13:53.879 11:51:47 -- nvmf/common.sh@470 -- # waitforlisten 1856606 00:13:53.879 11:51:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:53.879 11:51:47 -- common/autotest_common.sh@819 -- # '[' -z 1856606 ']' 00:13:53.879 11:51:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.879 11:51:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:53.879 11:51:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.879 11:51:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:53.879 11:51:47 -- common/autotest_common.sh@10 -- # set +x 00:13:53.879 [2024-06-10 11:51:47.456227] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:53.879 [2024-06-10 11:51:47.456294] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.879 EAL: No free 2048 kB hugepages reported on node 1 00:13:53.879 [2024-06-10 11:51:47.544792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:53.879 [2024-06-10 11:51:47.636801] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:53.879 [2024-06-10 11:51:47.636970] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.879 [2024-06-10 11:51:47.636983] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.879 [2024-06-10 11:51:47.636990] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
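nvmfappstart then runs the target application inside that namespace. The core mask 0xE matches the three reactors reported on cores 1-3 below, and -e 0xFFFF enables every tracepoint group, which is why app_setup_trace points at spdk_trace and /dev/shm/nvmf_trace.0 for offline analysis. A sketch of what the helper does (the command is verbatim from the trace; backgrounding with & and the $! capture are assumptions about the helper):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # waitforlisten then polls the RPC socket (/var/tmp/spdk.sock in the trace)
    # before any rpc.py call is issued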
00:13:53.879 [2024-06-10 11:51:47.637131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:53.879 [2024-06-10 11:51:47.637300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:53.879 [2024-06-10 11:51:47.637345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.822 11:51:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:54.822 11:51:48 -- common/autotest_common.sh@852 -- # return 0 00:13:54.822 11:51:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:54.822 11:51:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:54.822 11:51:48 -- common/autotest_common.sh@10 -- # set +x 00:13:54.822 11:51:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:54.822 11:51:48 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:54.822 11:51:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.822 11:51:48 -- common/autotest_common.sh@10 -- # set +x 00:13:54.822 [2024-06-10 11:51:48.279014] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:54.822 11:51:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.822 11:51:48 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:54.822 11:51:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.822 11:51:48 -- common/autotest_common.sh@10 -- # set +x 00:13:54.822 Malloc0 00:13:54.822 11:51:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.822 11:51:48 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:54.822 11:51:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.822 11:51:48 -- common/autotest_common.sh@10 -- # set +x 00:13:54.822 Delay0 00:13:54.822 11:51:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.822 11:51:48 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:54.822 11:51:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.822 11:51:48 -- common/autotest_common.sh@10 -- # set +x 00:13:54.822 11:51:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.822 11:51:48 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:54.822 11:51:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.822 11:51:48 -- common/autotest_common.sh@10 -- # set +x 00:13:54.822 11:51:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.822 11:51:48 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:54.822 11:51:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.822 11:51:48 -- common/autotest_common.sh@10 -- # set +x 00:13:54.822 [2024-06-10 11:51:48.335654] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:54.822 11:51:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.822 11:51:48 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:54.822 11:51:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.822 11:51:48 -- common/autotest_common.sh@10 -- # set +x 00:13:54.822 11:51:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.822 11:51:48 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:54.822 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.822 [2024-06-10 11:51:48.483453] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:57.367 Initializing NVMe Controllers 00:13:57.367 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:57.367 controller IO queue size 128 less than required 00:13:57.367 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:57.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:57.367 Initialization complete. Launching workers. 00:13:57.367 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33070 00:13:57.367 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33131, failed to submit 62 00:13:57.367 success 33070, unsuccess 61, failed 0 00:13:57.367 11:51:50 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:57.367 11:51:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:57.367 11:51:50 -- common/autotest_common.sh@10 -- # set +x 00:13:57.367 11:51:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:57.367 11:51:50 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:57.367 11:51:50 -- target/abort.sh@38 -- # nvmftestfini 00:13:57.367 11:51:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:57.367 11:51:50 -- nvmf/common.sh@116 -- # sync 00:13:57.367 11:51:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:57.367 11:51:50 -- nvmf/common.sh@119 -- # set +e 00:13:57.367 11:51:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:57.367 11:51:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:57.367 rmmod nvme_tcp 00:13:57.367 rmmod nvme_fabrics 00:13:57.367 rmmod nvme_keyring 00:13:57.367 11:51:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:57.367 11:51:50 -- nvmf/common.sh@123 -- # set -e 00:13:57.367 11:51:50 -- nvmf/common.sh@124 -- # return 0 00:13:57.367 11:51:50 -- nvmf/common.sh@477 -- # '[' -n 1856606 ']' 00:13:57.367 11:51:50 -- nvmf/common.sh@478 -- # killprocess 1856606 00:13:57.367 11:51:50 -- common/autotest_common.sh@926 -- # '[' -z 1856606 ']' 00:13:57.367 11:51:50 -- common/autotest_common.sh@930 -- # kill -0 1856606 00:13:57.367 11:51:50 -- common/autotest_common.sh@931 -- # uname 00:13:57.367 11:51:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:57.367 11:51:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1856606 00:13:57.367 11:51:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:57.367 11:51:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:57.367 11:51:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1856606' 00:13:57.367 killing process with pid 1856606 00:13:57.367 11:51:50 -- common/autotest_common.sh@945 -- # kill 1856606 00:13:57.367 11:51:50 -- common/autotest_common.sh@950 -- # wait 1856606 00:13:57.367 11:51:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:57.367 11:51:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:57.367 11:51:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:57.367 11:51:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:57.367 11:51:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:57.367 
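Collected from the trace, the abort test amounts to: create the TCP transport, build a 64 MiB / 4096-byte-block malloc bdev, wrap it in a delay bdev (presumably so that queued I/O is still outstanding when the aborts arrive), expose it through nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420, and drive it with the bundled abort example. As a sketch, with rpc_cmd being the suite's wrapper around rpc.py:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
    rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000      # artificial read/write latency
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The final summary in the trace (33131 aborts submitted, 33070 successful, 0 failed) is what the abort example reports before the subsystem is deleted and the target torn down again.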
11:51:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.367 11:51:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:57.367 11:51:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.361 11:51:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:59.361 00:13:59.361 real 0m12.971s 00:13:59.361 user 0m13.834s 00:13:59.361 sys 0m6.204s 00:13:59.361 11:51:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:59.361 11:51:53 -- common/autotest_common.sh@10 -- # set +x 00:13:59.361 ************************************ 00:13:59.361 END TEST nvmf_abort 00:13:59.361 ************************************ 00:13:59.361 11:51:53 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:59.361 11:51:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:59.361 11:51:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:59.361 11:51:53 -- common/autotest_common.sh@10 -- # set +x 00:13:59.361 ************************************ 00:13:59.361 START TEST nvmf_ns_hotplug_stress 00:13:59.361 ************************************ 00:13:59.361 11:51:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:59.622 * Looking for test storage... 00:13:59.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:59.622 11:51:53 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:59.622 11:51:53 -- nvmf/common.sh@7 -- # uname -s 00:13:59.622 11:51:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.622 11:51:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.622 11:51:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.622 11:51:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.622 11:51:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.622 11:51:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.622 11:51:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.622 11:51:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.622 11:51:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.622 11:51:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.622 11:51:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:59.622 11:51:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:59.622 11:51:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.622 11:51:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.622 11:51:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:59.622 11:51:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:59.622 11:51:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.622 11:51:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.622 11:51:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.623 11:51:53 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.623 11:51:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.623 11:51:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.623 11:51:53 -- paths/export.sh@5 -- # export PATH 00:13:59.623 11:51:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.623 11:51:53 -- nvmf/common.sh@46 -- # : 0 00:13:59.623 11:51:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:59.623 11:51:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:59.623 11:51:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:59.623 11:51:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.623 11:51:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.623 11:51:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:59.623 11:51:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:59.623 11:51:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:59.623 11:51:53 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:59.623 11:51:53 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:59.623 11:51:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:59.623 11:51:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:59.623 11:51:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:59.623 11:51:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:59.623 11:51:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:59.623 11:51:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:13:59.623 11:51:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:59.623 11:51:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.623 11:51:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:59.623 11:51:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:59.623 11:51:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:59.623 11:51:53 -- common/autotest_common.sh@10 -- # set +x 00:14:06.222 11:51:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:06.222 11:51:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:06.222 11:51:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:06.222 11:51:59 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:06.222 11:51:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:06.222 11:51:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:06.222 11:51:59 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:06.222 11:51:59 -- nvmf/common.sh@294 -- # net_devs=() 00:14:06.222 11:51:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:06.222 11:51:59 -- nvmf/common.sh@295 -- # e810=() 00:14:06.222 11:51:59 -- nvmf/common.sh@295 -- # local -ga e810 00:14:06.222 11:51:59 -- nvmf/common.sh@296 -- # x722=() 00:14:06.222 11:51:59 -- nvmf/common.sh@296 -- # local -ga x722 00:14:06.222 11:51:59 -- nvmf/common.sh@297 -- # mlx=() 00:14:06.222 11:51:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:06.223 11:51:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:06.223 11:51:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:06.223 11:51:59 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:06.223 11:51:59 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:06.223 11:51:59 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:06.223 11:51:59 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:06.223 11:51:59 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:06.223 11:51:59 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:06.223 11:51:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:06.223 11:51:59 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:06.223 11:51:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:06.223 11:51:59 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:06.223 11:51:59 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:06.223 11:51:59 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:06.223 11:51:59 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:06.223 11:51:59 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:06.223 11:51:59 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:06.223 11:51:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:06.223 11:51:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:06.223 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:06.223 11:51:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:06.223 11:51:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:06.223 11:51:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.223 11:51:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.223 11:51:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:06.223 11:51:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:06.223 11:51:59 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:06.223 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:06.223 11:51:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:06.223 11:51:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:06.223 11:51:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.223 11:51:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.223 11:51:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:06.223 11:51:59 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:06.223 11:51:59 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:06.223 11:51:59 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:06.223 11:51:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:06.223 11:51:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.223 11:51:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:06.223 11:51:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.223 11:51:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:06.223 Found net devices under 0000:31:00.0: cvl_0_0 00:14:06.223 11:51:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.223 11:51:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:06.223 11:51:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.223 11:51:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:06.223 11:51:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.223 11:51:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:06.223 Found net devices under 0000:31:00.1: cvl_0_1 00:14:06.223 11:51:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.223 11:51:59 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:06.223 11:51:59 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:06.223 11:51:59 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:06.223 11:51:59 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:06.223 11:51:59 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:06.223 11:51:59 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:06.223 11:51:59 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:06.223 11:51:59 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:06.223 11:51:59 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:06.223 11:51:59 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:06.223 11:51:59 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:06.223 11:51:59 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:06.223 11:51:59 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:06.223 11:51:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:06.223 11:51:59 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:06.223 11:51:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:06.223 11:51:59 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:06.223 11:51:59 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:06.223 11:51:59 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:06.223 11:51:59 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:06.223 11:51:59 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:06.223 11:51:59 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
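The device discovery above (gather_supported_nvmf_pci_devs) walks a whitelist of Intel/Mellanox PCI IDs and then maps each matching function to its kernel net device through sysfs; both E810 ports on this runner (0000:31:00.0 and 0000:31:00.1) resolve to cvl_0_0 and cvl_0_1. Stripped of the whitelist logic, the sysfs lookup the trace relies on is essentially this standalone sketch (interface and PCI names taken from this run):

# Minimal version of the pci_net_devs lookup from nvmf/common.sh.
for pci in 0000:31:00.0 0000:31:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)     # e.g. /sys/bus/pci/devices/0000:31:00.0/net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")              # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done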
00:14:06.223 11:51:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:06.223 11:51:59 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:06.223 11:51:59 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:06.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:06.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:14:06.223 00:14:06.223 --- 10.0.0.2 ping statistics --- 00:14:06.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.223 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:14:06.223 11:51:59 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:06.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:06.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:14:06.485 00:14:06.485 --- 10.0.0.1 ping statistics --- 00:14:06.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.485 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:14:06.485 11:52:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.485 11:52:00 -- nvmf/common.sh@410 -- # return 0 00:14:06.485 11:52:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:06.485 11:52:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.485 11:52:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:06.485 11:52:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:06.485 11:52:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.485 11:52:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:06.485 11:52:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:06.485 11:52:00 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:14:06.485 11:52:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:06.485 11:52:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:06.485 11:52:00 -- common/autotest_common.sh@10 -- # set +x 00:14:06.485 11:52:00 -- nvmf/common.sh@469 -- # nvmfpid=1861391 00:14:06.485 11:52:00 -- nvmf/common.sh@470 -- # waitforlisten 1861391 00:14:06.485 11:52:00 -- common/autotest_common.sh@819 -- # '[' -z 1861391 ']' 00:14:06.485 11:52:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.485 11:52:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:06.485 11:52:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.485 11:52:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:06.485 11:52:00 -- common/autotest_common.sh@10 -- # set +x 00:14:06.485 11:52:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:06.485 [2024-06-10 11:52:00.076046] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
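nvmf_tcp_init then splits the two ports into an initiator side and a target side so that test traffic actually leaves the host: cvl_0_0 is moved into a private network namespace and everything else talks to it over cvl_0_1. Condensed from the commands in the trace (interface names and the 10.0.0.x addresses are the ones used on this runner), the plumbing is roughly:

# Network setup for the TCP tests, condensed from nvmf/common.sh as traced above.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open TCP/4420 on the initiator-side interface
ping -c 1 10.0.0.2                                                  # reachability checks, as in the log
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1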
00:14:06.485 [2024-06-10 11:52:00.076100] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.485 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.485 [2024-06-10 11:52:00.161267] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:06.485 [2024-06-10 11:52:00.252182] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:06.485 [2024-06-10 11:52:00.252361] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.485 [2024-06-10 11:52:00.252372] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.485 [2024-06-10 11:52:00.252381] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:06.485 [2024-06-10 11:52:00.252602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.485 [2024-06-10 11:52:00.252837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:06.485 [2024-06-10 11:52:00.252838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.426 11:52:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:07.426 11:52:00 -- common/autotest_common.sh@852 -- # return 0 00:14:07.426 11:52:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:07.426 11:52:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:07.426 11:52:00 -- common/autotest_common.sh@10 -- # set +x 00:14:07.426 11:52:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.426 11:52:00 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:14:07.426 11:52:00 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:07.426 [2024-06-10 11:52:01.035076] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:07.426 11:52:01 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:07.687 11:52:01 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:07.687 [2024-06-10 11:52:01.364523] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:07.687 11:52:01 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:07.947 11:52:01 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:14:07.947 Malloc0 00:14:08.208 11:52:01 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:08.208 Delay0 00:14:08.208 11:52:01 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.471 11:52:02 -- target/ns_hotplug_stress.sh@35 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:14:08.471 NULL1 00:14:08.471 11:52:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:08.732 11:52:02 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:14:08.732 11:52:02 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1861772 00:14:08.732 11:52:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:08.732 11:52:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.732 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.674 Read completed with error (sct=0, sc=11) 00:14:09.935 11:52:03 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:09.935 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:09.935 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:09.935 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:09.935 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:09.935 11:52:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:14:09.935 11:52:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:14:10.196 true 00:14:10.196 11:52:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:10.196 11:52:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.137 11:52:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.137 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:11.137 11:52:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:14:11.137 11:52:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:14:11.397 true 00:14:11.397 11:52:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:11.397 11:52:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.397 11:52:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.658 11:52:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:14:11.658 11:52:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:14:11.658 true 00:14:11.919 11:52:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:11.919 11:52:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.919 11:52:05 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.179 11:52:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:14:12.179 11:52:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:14:12.179 true 00:14:12.179 11:52:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:12.179 11:52:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.440 11:52:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.701 11:52:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:14:12.701 11:52:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:14:12.701 true 00:14:12.701 11:52:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:12.701 11:52:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.962 11:52:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.962 11:52:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:14:12.962 11:52:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:14:13.222 true 00:14:13.222 11:52:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:13.222 11:52:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.163 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:14.163 11:52:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.423 11:52:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:14:14.423 11:52:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:14:14.423 true 00:14:14.423 11:52:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:14.423 11:52:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.683 11:52:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.683 11:52:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:14:14.683 11:52:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:14:14.944 true 00:14:14.944 11:52:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:14.944 11:52:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.204 11:52:08 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:15.204 11:52:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:14:15.204 11:52:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:14:15.465 true 00:14:15.465 11:52:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:15.465 11:52:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.725 11:52:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:15.725 11:52:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:14:15.725 11:52:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:14:15.986 true 00:14:15.986 11:52:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:15.986 11:52:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.986 11:52:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:16.246 11:52:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:14:16.246 11:52:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:14:16.506 true 00:14:16.506 11:52:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:16.506 11:52:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:17.448 11:52:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:17.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:17.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:17.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:17.448 11:52:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:14:17.448 11:52:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:14:17.708 true 00:14:17.708 11:52:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:17.708 11:52:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.649 11:52:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:18.649 11:52:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:14:18.649 11:52:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:18.909 true 00:14:18.909 11:52:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:18.909 11:52:12 -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.909 11:52:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:19.169 11:52:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:14:19.169 11:52:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:14:19.169 true 00:14:19.430 11:52:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:19.430 11:52:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.430 11:52:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:19.690 11:52:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:14:19.690 11:52:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:19.690 true 00:14:19.690 11:52:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:19.690 11:52:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.950 11:52:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:20.210 11:52:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:14:20.210 11:52:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:14:20.210 true 00:14:20.210 11:52:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:20.210 11:52:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.470 11:52:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:20.470 11:52:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:14:20.470 11:52:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:20.730 true 00:14:20.730 11:52:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:20.730 11:52:14 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.991 11:52:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:20.991 11:52:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:14:20.991 11:52:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:21.252 true 00:14:21.252 11:52:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:21.252 11:52:14 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.513 11:52:15 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:21.513 11:52:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:14:21.513 11:52:15 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:14:21.773 true 00:14:21.773 11:52:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:21.773 11:52:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.773 11:52:15 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:22.034 11:52:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:14:22.034 11:52:15 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:22.294 true 00:14:22.294 11:52:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:22.294 11:52:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.294 11:52:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:22.556 11:52:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:14:22.556 11:52:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:22.556 true 00:14:22.817 11:52:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:22.817 11:52:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:23.758 11:52:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:23.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:23.758 11:52:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:14:23.758 11:52:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:23.758 true 00:14:24.020 11:52:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:24.020 11:52:17 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.020 11:52:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:24.281 11:52:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:14:24.281 11:52:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:24.281 true 00:14:24.281 11:52:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:24.281 11:52:18 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.541 11:52:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:24.802 11:52:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:24.802 11:52:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:24.802 true 00:14:24.802 11:52:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:24.802 11:52:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.745 11:52:19 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:26.006 11:52:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:26.006 11:52:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:26.006 true 00:14:26.006 11:52:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:26.006 11:52:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.267 11:52:19 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:26.267 11:52:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:26.267 11:52:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:26.528 true 00:14:26.528 11:52:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:26.528 11:52:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.788 11:52:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:26.788 11:52:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:26.788 11:52:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:27.059 true 00:14:27.059 11:52:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:27.060 11:52:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.060 11:52:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:27.374 11:52:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:27.374 11:52:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:27.374 true 00:14:27.374 11:52:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:27.374 11:52:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
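Each of the repetitive blocks above is one pass of the ns_hotplug_stress main loop: while the spdk_nvme_perf workload started earlier (PERF_PID 1861772, run with -t 30 against cnode1) is still alive, the script hot-adds Delay0 as a second namespace, grows NULL1 by one block, and hot-removes namespace 1 again. Read back from the trace, the cycle looks roughly like the sketch below (a reconstruction of the visible calls, not the script verbatim; rpc.py is abbreviated):

# PERF_PID is the spdk_nvme_perf process started earlier in the trace (1861772 here).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
null_size=1000
while true; do
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0     # hot-add a namespace under I/O
    null_size=$((null_size + 1))
    "$RPC" bdev_null_resize NULL1 "$null_size"                         # grow the other namespace's backing bdev
    kill -0 "$PERF_PID" || break                                       # stop once spdk_nvme_perf (-t 30) has exited
    "$RPC" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1       # hot-remove namespace 1 under I/O
done

The "Message suppressed 999 times: Read completed with error" lines interleaved above appear to be the perf workload reporting reads that failed while its namespace was detached, which is exactly the disruption this test is exercising.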
00:14:27.650 11:52:21 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:27.911 11:52:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:14:27.912 11:52:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:14:27.912 true 00:14:27.912 11:52:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:27.912 11:52:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.854 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:28.854 11:52:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:29.116 11:52:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:14:29.116 11:52:22 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:14:29.116 true 00:14:29.116 11:52:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:29.116 11:52:22 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.377 11:52:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:29.377 11:52:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:14:29.377 11:52:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:14:29.638 true 00:14:29.638 11:52:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:29.638 11:52:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.899 11:52:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:29.899 11:52:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:14:29.899 11:52:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:14:30.160 true 00:14:30.160 11:52:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:30.160 11:52:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.421 11:52:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:30.421 11:52:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:14:30.421 11:52:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:14:30.682 true 00:14:30.682 11:52:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:30.682 11:52:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.682 11:52:24 -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:30.942 11:52:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:14:30.942 11:52:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:14:31.203 true 00:14:31.203 11:52:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:31.203 11:52:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:32.145 11:52:25 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:32.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:32.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:32.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:32.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:32.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:32.145 11:52:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:14:32.145 11:52:25 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:14:32.406 true 00:14:32.406 11:52:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:32.406 11:52:26 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:33.348 11:52:26 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:33.348 11:52:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:14:33.348 11:52:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:14:33.609 true 00:14:33.609 11:52:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:33.609 11:52:27 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:33.609 11:52:27 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:33.870 11:52:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:14:33.870 11:52:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:14:34.131 true 00:14:34.131 11:52:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:34.131 11:52:27 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.131 11:52:27 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:34.391 11:52:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:14:34.392 11:52:28 -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:14:34.653 true 00:14:34.653 11:52:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:34.653 11:52:28 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.653 11:52:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:34.914 11:52:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:14:34.914 11:52:28 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:14:34.914 true 00:14:34.914 11:52:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:34.914 11:52:28 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:35.174 11:52:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:35.435 11:52:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:14:35.435 11:52:28 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:14:35.435 true 00:14:35.435 11:52:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:35.435 11:52:29 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:36.377 11:52:30 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:36.639 11:52:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:14:36.639 11:52:30 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:14:36.639 true 00:14:36.639 11:52:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:36.639 11:52:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:36.899 11:52:30 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:37.160 11:52:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:14:37.160 11:52:30 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:14:37.160 true 00:14:37.160 11:52:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:37.160 11:52:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:37.421 11:52:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:37.421 11:52:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:14:37.421 11:52:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:14:37.682 true 
00:14:37.682 11:52:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:37.682 11:52:31 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:37.942 11:52:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:37.942 11:52:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:14:37.942 11:52:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:14:38.203 true 00:14:38.203 11:52:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:38.203 11:52:31 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.463 11:52:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:38.463 11:52:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:14:38.463 11:52:32 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:14:38.724 true 00:14:38.724 11:52:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:38.724 11:52:32 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.724 11:52:32 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:38.984 Initializing NVMe Controllers 00:14:38.984 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:38.984 Controller IO queue size 128, less than required. 00:14:38.984 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:38.984 Controller IO queue size 128, less than required. 00:14:38.984 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:38.984 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:38.984 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:38.984 Initialization complete. Launching workers. 
00:14:38.984 ======================================================== 00:14:38.984 Latency(us) 00:14:38.984 Device Information : IOPS MiB/s Average min max 00:14:38.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 682.18 0.33 63618.79 2281.63 1160786.09 00:14:38.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11188.26 5.46 11440.15 1987.36 493832.75 00:14:38.984 ======================================================== 00:14:38.984 Total : 11870.44 5.80 14438.78 1987.36 1160786.09 00:14:38.984 00:14:38.984 11:52:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:14:38.984 11:52:32 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:14:39.245 true 00:14:39.245 11:52:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1861772 00:14:39.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1861772) - No such process 00:14:39.245 11:52:32 -- target/ns_hotplug_stress.sh@53 -- # wait 1861772 00:14:39.245 11:52:32 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:39.245 11:52:32 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:39.506 11:52:33 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:14:39.506 11:52:33 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:14:39.506 11:52:33 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:14:39.506 11:52:33 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:39.506 11:52:33 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:14:39.506 null0 00:14:39.506 11:52:33 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:39.506 11:52:33 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:39.506 11:52:33 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:14:39.767 null1 00:14:39.767 11:52:33 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:39.767 11:52:33 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:39.767 11:52:33 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:14:39.767 null2 00:14:40.027 11:52:33 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:40.027 11:52:33 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:40.027 11:52:33 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:40.027 null3 00:14:40.027 11:52:33 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:40.027 11:52:33 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:40.027 11:52:33 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:40.288 null4 00:14:40.288 11:52:33 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:40.288 11:52:33 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:40.288 11:52:33 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 
00:14:40.288 null5 00:14:40.288 11:52:34 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:40.288 11:52:34 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:40.288 11:52:34 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:40.548 null6 00:14:40.548 11:52:34 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:40.548 11:52:34 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:40.549 11:52:34 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:40.810 null7 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
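[Editor's aside] The ns_hotplug_stress.sh@14-@18 xtrace markers above and below come from a small per-namespace helper in the test script. A minimal bash sketch of that helper, reconstructed from the trace rather than quoted from the SPDK source (rpc_py and subnqn are shorthand for the full rpc.py path and the cnode1 NQN seen in the log):

# Shorthand for the values that appear expanded in every traced command.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2016-06.io.spdk:cnode1

add_remove() {
    local nsid=$1 bdev=$2
    # Hot-add and hot-remove the same namespace ten times in a tight loop,
    # racing against the other workers doing the same on cnode1.
    for ((i = 0; i < 10; i++)); do
        "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$subnqn" "$bdev"
        "$rpc_py" nvmf_subsystem_remove_ns "$subnqn" "$nsid"
    done
}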
00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@66 -- # wait 1868331 1868332 1868334 1868336 1868338 1868340 1868342 1868344 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:40.810 11:52:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
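[Editor's aside] The @58-@66 markers, including the wait on pids 1868331-1868344 above, correspond to the launcher that creates one null bdev per worker and runs eight add_remove helpers in parallel. A rough sketch under the same assumptions, reusing rpc_py, subnqn and add_remove from the previous block:

nthreads=8
pids=()

# One 100 MB null bdev with 4096-byte blocks per worker: null0 .. null7.
for ((i = 0; i < nthreads; ++i)); do
    "$rpc_py" bdev_null_create "null$i" 100 4096
done

# Each worker churns its own namespace ID (1..8) against its own bdev;
# the PIDs are collected so the script can wait for all of them.
for ((i = 0; i < nthreads; ++i)); do
    add_remove $((i + 1)) "null$i" &
    pids+=($!)
done

wait "${pids[@]}"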
00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:41.072 11:52:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:41.334 11:52:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:41.334 11:52:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:41.334 11:52:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:41.334 11:52:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:41.334 11:52:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:41.334 11:52:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:41.334 11:52:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:41.334 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.334 11:52:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.334 11:52:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:41.334 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.334 11:52:35 -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:14:41.334 11:52:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:41.334 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.334 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.334 11:52:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:41.334 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.334 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.334 11:52:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:41.334 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.334 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.334 11:52:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:41.335 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.335 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.335 11:52:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:41.335 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.335 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.335 11:52:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:41.596 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.596 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.596 11:52:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:41.596 11:52:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:41.596 11:52:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:41.596 11:52:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:41.596 11:52:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:41.596 11:52:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:41.596 11:52:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:41.596 11:52:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:41.596 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.596 11:52:35 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.596 11:52:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:41.596 11:52:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:41.596 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.596 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.596 11:52:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:41.857 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.857 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.857 11:52:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:41.857 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.857 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.857 11:52:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:41.857 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.857 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.858 11:52:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:41.858 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.858 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.858 11:52:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:41.858 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.858 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.858 11:52:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:41.858 11:52:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:41.858 11:52:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:41.858 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.858 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.858 11:52:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:41.858 11:52:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:41.858 11:52:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:41.858 11:52:35 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:42.119 11:52:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:42.381 11:52:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:42.381 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.381 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.381 11:52:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:42.381 11:52:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:42.381 11:52:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:42.381 11:52:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:42.381 11:52:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:42.381 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.381 11:52:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.381 11:52:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:42.381 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.381 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.381 11:52:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:42.381 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.381 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.381 11:52:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:42.381 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.381 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.381 11:52:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:42.381 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.381 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.381 11:52:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:42.381 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.381 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.381 11:52:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:42.381 11:52:36 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:42.381 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.381 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.381 11:52:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:42.642 11:52:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:42.642 11:52:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.642 11:52:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:42.642 11:52:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:42.642 11:52:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:42.642 11:52:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:42.642 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.642 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.642 11:52:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:42.642 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.642 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.642 11:52:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:42.642 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.642 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.642 11:52:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:42.642 11:52:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:42.642 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.642 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.642 11:52:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:42.903 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.903 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.903 11:52:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:42.903 11:52:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:14:42.903 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.903 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.903 11:52:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:42.903 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.903 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.903 11:52:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:42.903 11:52:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:42.903 11:52:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.903 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.903 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.903 11:52:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:42.903 11:52:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:42.903 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.903 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.903 11:52:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:42.903 11:52:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:42.903 11:52:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:42.903 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.903 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.903 11:52:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:42.903 11:52:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:43.165 11:52:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:43.165 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.165 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.165 11:52:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:43.165 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.165 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.165 11:52:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:43.165 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.165 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.165 11:52:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:43.165 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.165 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.165 11:52:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:43.165 11:52:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:43.165 11:52:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:43.165 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.165 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.165 11:52:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:43.165 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.165 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.165 11:52:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:43.165 11:52:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.165 11:52:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:43.165 11:52:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:43.165 11:52:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:43.425 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.425 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.425 11:52:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:43.425 11:52:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:43.425 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.425 11:52:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.425 11:52:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:43.425 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.425 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.425 11:52:37 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:43.425 11:52:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:43.425 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.425 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.425 11:52:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:43.425 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.425 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.425 11:52:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:43.425 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.425 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.425 11:52:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:43.425 11:52:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:43.425 11:52:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:43.425 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.425 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.425 11:52:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:43.425 11:52:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:43.686 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.686 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.686 11:52:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:43.686 11:52:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.686 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.686 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.686 11:52:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:43.686 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.686 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.686 11:52:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:43.686 11:52:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:43.686 11:52:37 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:43.686 11:52:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:43.686 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.686 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.686 11:52:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:43.686 11:52:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:43.686 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.686 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.686 11:52:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:43.686 11:52:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:43.947 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.947 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.947 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.947 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.947 11:52:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:43.947 11:52:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:43.947 11:52:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:43.947 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.947 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.947 11:52:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:43.947 11:52:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:43.947 11:52:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.947 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.947 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.947 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.947 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.947 11:52:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:43.947 11:52:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
00:14:43.947 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.947 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.947 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.947 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.947 11:52:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:43.947 11:52:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:43.947 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.947 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:44.208 11:52:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:44.208 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:44.208 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:44.208 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:44.208 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:44.208 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:44.208 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:44.208 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:44.208 11:52:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:44.208 11:52:37 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:44.208 11:52:37 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:44.208 11:52:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:44.208 11:52:37 -- nvmf/common.sh@116 -- # sync 00:14:44.208 11:52:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:44.208 11:52:37 -- nvmf/common.sh@119 -- # set +e 00:14:44.208 11:52:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:44.208 11:52:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:44.208 rmmod nvme_tcp 00:14:44.208 rmmod nvme_fabrics 00:14:44.208 rmmod nvme_keyring 00:14:44.208 11:52:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:44.208 11:52:37 -- nvmf/common.sh@123 -- # set -e 00:14:44.208 11:52:37 -- nvmf/common.sh@124 -- # return 0 00:14:44.208 11:52:37 -- nvmf/common.sh@477 -- # '[' -n 1861391 ']' 00:14:44.208 11:52:37 -- nvmf/common.sh@478 -- # killprocess 1861391 00:14:44.208 11:52:37 -- common/autotest_common.sh@926 -- # '[' -z 1861391 ']' 00:14:44.208 11:52:37 -- common/autotest_common.sh@930 -- # kill -0 1861391 00:14:44.208 11:52:37 -- common/autotest_common.sh@931 -- # uname 00:14:44.208 11:52:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:44.208 11:52:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1861391 00:14:44.469 11:52:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:44.469 11:52:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:44.469 11:52:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1861391' 00:14:44.469 killing process with pid 1861391 00:14:44.469 11:52:38 -- common/autotest_common.sh@945 -- # kill 1861391 00:14:44.469 11:52:38 -- common/autotest_common.sh@950 -- # wait 1861391 00:14:44.469 11:52:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:44.469 11:52:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:44.469 11:52:38 -- nvmf/common.sh@484 -- 
# nvmf_tcp_fini 00:14:44.469 11:52:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:44.469 11:52:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:44.469 11:52:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.469 11:52:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:44.469 11:52:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.020 11:52:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:47.020 00:14:47.020 real 0m47.138s 00:14:47.020 user 3m7.502s 00:14:47.020 sys 0m14.616s 00:14:47.020 11:52:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:47.020 11:52:40 -- common/autotest_common.sh@10 -- # set +x 00:14:47.020 ************************************ 00:14:47.020 END TEST nvmf_ns_hotplug_stress 00:14:47.020 ************************************ 00:14:47.020 11:52:40 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:47.020 11:52:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:47.020 11:52:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:47.020 11:52:40 -- common/autotest_common.sh@10 -- # set +x 00:14:47.020 ************************************ 00:14:47.020 START TEST nvmf_connect_stress 00:14:47.020 ************************************ 00:14:47.021 11:52:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:47.021 * Looking for test storage... 00:14:47.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:47.021 11:52:40 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:47.021 11:52:40 -- nvmf/common.sh@7 -- # uname -s 00:14:47.021 11:52:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.021 11:52:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.021 11:52:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.021 11:52:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.021 11:52:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.021 11:52:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.021 11:52:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.021 11:52:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.021 11:52:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.021 11:52:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.021 11:52:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:47.021 11:52:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:47.021 11:52:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.021 11:52:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.021 11:52:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:47.021 11:52:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:47.021 11:52:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.021 11:52:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.021 11:52:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.021 11:52:40 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.021 11:52:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.021 11:52:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.021 11:52:40 -- paths/export.sh@5 -- # export PATH 00:14:47.021 11:52:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.021 11:52:40 -- nvmf/common.sh@46 -- # : 0 00:14:47.021 11:52:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:47.021 11:52:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:47.021 11:52:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:47.021 11:52:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.021 11:52:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.021 11:52:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:47.021 11:52:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:47.021 11:52:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:47.021 11:52:40 -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:47.021 11:52:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:47.021 11:52:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:47.021 11:52:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:47.021 11:52:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:47.021 11:52:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:47.021 11:52:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.021 11:52:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.021 11:52:40 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.021 11:52:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:47.021 11:52:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:47.021 11:52:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:47.021 11:52:40 -- common/autotest_common.sh@10 -- # set +x 00:14:53.634 11:52:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:53.634 11:52:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:53.634 11:52:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:53.634 11:52:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:53.634 11:52:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:53.634 11:52:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:53.634 11:52:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:53.634 11:52:47 -- nvmf/common.sh@294 -- # net_devs=() 00:14:53.634 11:52:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:53.634 11:52:47 -- nvmf/common.sh@295 -- # e810=() 00:14:53.634 11:52:47 -- nvmf/common.sh@295 -- # local -ga e810 00:14:53.634 11:52:47 -- nvmf/common.sh@296 -- # x722=() 00:14:53.634 11:52:47 -- nvmf/common.sh@296 -- # local -ga x722 00:14:53.634 11:52:47 -- nvmf/common.sh@297 -- # mlx=() 00:14:53.634 11:52:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:53.634 11:52:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:53.634 11:52:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:53.634 11:52:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:53.634 11:52:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:53.634 11:52:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:53.634 11:52:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:53.634 11:52:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:53.634 11:52:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:53.634 11:52:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:53.634 11:52:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:53.634 11:52:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:53.634 11:52:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:53.634 11:52:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:53.634 11:52:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:53.634 11:52:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:53.634 11:52:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:53.634 11:52:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:53.634 11:52:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:53.634 11:52:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:53.634 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:53.634 11:52:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:53.634 11:52:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:53.634 11:52:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:53.634 11:52:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:53.634 11:52:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:53.634 11:52:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:53.634 11:52:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:53.634 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:53.634 
11:52:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:53.634 11:52:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:53.634 11:52:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:53.634 11:52:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:53.634 11:52:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:53.634 11:52:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:53.634 11:52:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:53.634 11:52:47 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:53.634 11:52:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:53.634 11:52:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.634 11:52:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:53.634 11:52:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.634 11:52:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:53.634 Found net devices under 0000:31:00.0: cvl_0_0 00:14:53.634 11:52:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:53.634 11:52:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:53.634 11:52:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.634 11:52:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:53.634 11:52:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.634 11:52:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:53.634 Found net devices under 0000:31:00.1: cvl_0_1 00:14:53.634 11:52:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:53.634 11:52:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:53.634 11:52:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:53.634 11:52:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:53.634 11:52:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:53.634 11:52:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:53.634 11:52:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:53.634 11:52:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:53.634 11:52:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:53.634 11:52:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:53.634 11:52:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:53.634 11:52:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:53.634 11:52:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:53.634 11:52:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:53.634 11:52:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:53.634 11:52:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:53.634 11:52:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:53.634 11:52:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:53.634 11:52:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:53.896 11:52:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:53.896 11:52:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:53.896 11:52:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:53.896 11:52:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:53.896 11:52:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:53.896 11:52:47 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:53.896 11:52:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:53.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:53.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:14:53.896 00:14:53.896 --- 10.0.0.2 ping statistics --- 00:14:53.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.896 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:14:53.896 11:52:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:53.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:53.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.459 ms 00:14:53.896 00:14:53.896 --- 10.0.0.1 ping statistics --- 00:14:53.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.896 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:14:53.896 11:52:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:53.896 11:52:47 -- nvmf/common.sh@410 -- # return 0 00:14:53.896 11:52:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:53.896 11:52:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:53.896 11:52:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:53.896 11:52:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:53.896 11:52:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:53.896 11:52:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:53.896 11:52:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:53.896 11:52:47 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:53.896 11:52:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:53.896 11:52:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:53.896 11:52:47 -- common/autotest_common.sh@10 -- # set +x 00:14:54.157 11:52:47 -- nvmf/common.sh@469 -- # nvmfpid=1873574 00:14:54.157 11:52:47 -- nvmf/common.sh@470 -- # waitforlisten 1873574 00:14:54.157 11:52:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:54.157 11:52:47 -- common/autotest_common.sh@819 -- # '[' -z 1873574 ']' 00:14:54.157 11:52:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.157 11:52:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:54.157 11:52:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.157 11:52:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:54.157 11:52:47 -- common/autotest_common.sh@10 -- # set +x 00:14:54.157 [2024-06-10 11:52:47.718519] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
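For reference, the nvmf_tcp_init sequence traced above boils down to the following two-port NVMe/TCP test-bed setup; this is a condensed sketch using the device names and addresses from this log (cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2), not the literal nvmf/common.sh source:

    ip netns add cvl_0_0_ns_spdk                                   # target side lives in its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the first E810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) in
    ping -c 1 10.0.0.2                                             # connectivity check, root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # and back
    modprobe nvme-tcp                                              # initiator-side kernel driver

Once both pings succeed the script treats the fabric as usable and starts the target application, as the trace continues below.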
00:14:54.157 [2024-06-10 11:52:47.718579] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.157 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.157 [2024-06-10 11:52:47.805886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:54.157 [2024-06-10 11:52:47.896826] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:54.157 [2024-06-10 11:52:47.896988] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.157 [2024-06-10 11:52:47.897000] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.157 [2024-06-10 11:52:47.897007] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:54.157 [2024-06-10 11:52:47.897141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.157 [2024-06-10 11:52:47.897306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:54.157 [2024-06-10 11:52:47.897349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.099 11:52:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:55.099 11:52:48 -- common/autotest_common.sh@852 -- # return 0 00:14:55.099 11:52:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:55.099 11:52:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:55.099 11:52:48 -- common/autotest_common.sh@10 -- # set +x 00:14:55.099 11:52:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.099 11:52:48 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:55.099 11:52:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.099 11:52:48 -- common/autotest_common.sh@10 -- # set +x 00:14:55.099 [2024-06-10 11:52:48.547252] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:55.099 11:52:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:55.099 11:52:48 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:55.099 11:52:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.099 11:52:48 -- common/autotest_common.sh@10 -- # set +x 00:14:55.099 11:52:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:55.099 11:52:48 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:55.099 11:52:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.099 11:52:48 -- common/autotest_common.sh@10 -- # set +x 00:14:55.099 [2024-06-10 11:52:48.571688] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:55.099 11:52:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:55.099 11:52:48 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:55.099 11:52:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.099 11:52:48 -- common/autotest_common.sh@10 -- # set +x 00:14:55.099 NULL1 00:14:55.099 11:52:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:55.099 11:52:48 -- target/connect_stress.sh@21 -- # PERF_PID=1873612 00:14:55.099 11:52:48 -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:55.099 11:52:48 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:55.099 11:52:48 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:55.099 11:52:48 -- target/connect_stress.sh@27 -- # seq 1 20 00:14:55.099 11:52:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:55.099 11:52:48 -- target/connect_stress.sh@28 -- # cat 00:14:55.099 11:52:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:55.099 11:52:48 -- target/connect_stress.sh@28 -- # cat 00:14:55.099 11:52:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:55.099 11:52:48 -- target/connect_stress.sh@28 -- # cat 00:14:55.099 11:52:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:55.099 11:52:48 -- target/connect_stress.sh@28 -- # cat 00:14:55.099 11:52:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:55.099 11:52:48 -- target/connect_stress.sh@28 -- # cat 00:14:55.099 11:52:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:55.099 11:52:48 -- target/connect_stress.sh@28 -- # cat 00:14:55.099 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.099 11:52:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:55.099 11:52:48 -- target/connect_stress.sh@28 -- # cat 00:14:55.099 11:52:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:55.099 11:52:48 -- target/connect_stress.sh@28 -- # cat 00:14:55.099 11:52:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:55.099 11:52:48 -- target/connect_stress.sh@28 -- # cat 00:14:55.099 11:52:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:55.099 11:52:48 -- target/connect_stress.sh@28 -- # cat 00:14:55.099 11:52:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:55.099 11:52:48 -- target/connect_stress.sh@28 -- # cat 00:14:55.099 11:52:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:55.099 11:52:48 -- target/connect_stress.sh@28 -- # cat 00:14:55.099 11:52:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:55.099 11:52:48 -- target/connect_stress.sh@28 -- # cat 00:14:55.099 11:52:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:55.099 11:52:48 -- target/connect_stress.sh@28 -- # cat 00:14:55.099 11:52:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:55.099 11:52:48 -- target/connect_stress.sh@28 -- # cat 00:14:55.099 11:52:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:55.099 11:52:48 -- target/connect_stress.sh@28 -- # cat 00:14:55.099 11:52:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:55.099 11:52:48 -- target/connect_stress.sh@28 -- # cat 00:14:55.099 11:52:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:55.099 11:52:48 -- target/connect_stress.sh@28 -- # cat 00:14:55.099 11:52:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:55.099 11:52:48 -- target/connect_stress.sh@28 -- # cat 00:14:55.099 11:52:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:55.099 11:52:48 -- target/connect_stress.sh@28 -- # cat 00:14:55.099 11:52:48 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:14:55.099 11:52:48 -- target/connect_stress.sh@35 -- # 
rpc_cmd 00:14:55.099 11:52:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.099 11:52:48 -- common/autotest_common.sh@10 -- # set +x 00:14:55.359 11:52:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:55.359 11:52:49 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:14:55.360 11:52:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:55.360 11:52:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.360 11:52:49 -- common/autotest_common.sh@10 -- # set +x 00:14:55.621 11:52:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:55.621 11:52:49 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:14:55.621 11:52:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:55.621 11:52:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.621 11:52:49 -- common/autotest_common.sh@10 -- # set +x 00:14:56.196 11:52:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.196 11:52:49 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:14:56.196 11:52:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:56.196 11:52:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.196 11:52:49 -- common/autotest_common.sh@10 -- # set +x 00:14:56.492 11:52:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.492 11:52:49 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:14:56.492 11:52:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:56.492 11:52:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.492 11:52:49 -- common/autotest_common.sh@10 -- # set +x 00:14:56.761 11:52:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.761 11:52:50 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:14:56.761 11:52:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:56.761 11:52:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.761 11:52:50 -- common/autotest_common.sh@10 -- # set +x 00:14:57.022 11:52:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:57.022 11:52:50 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:14:57.022 11:52:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:57.022 11:52:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:57.022 11:52:50 -- common/autotest_common.sh@10 -- # set +x 00:14:57.283 11:52:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:57.283 11:52:50 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:14:57.283 11:52:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:57.283 11:52:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:57.283 11:52:50 -- common/autotest_common.sh@10 -- # set +x 00:14:57.544 11:52:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:57.544 11:52:51 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:14:57.544 11:52:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:57.544 11:52:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:57.544 11:52:51 -- common/autotest_common.sh@10 -- # set +x 00:14:58.116 11:52:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.116 11:52:51 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:14:58.116 11:52:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:58.116 11:52:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.116 11:52:51 -- common/autotest_common.sh@10 -- # set +x 00:14:58.377 11:52:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.377 11:52:51 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:14:58.377 11:52:51 -- target/connect_stress.sh@35 -- # rpc_cmd 
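The repeated 'kill -0 1873612' / 'rpc_cmd' entries here are the supervision loop of connect_stress.sh: while the connect_stress binary (PID 1873612 in this run) keeps connecting and disconnecting against nqn.2016-06.io.spdk:cnode1, the script replays the RPC batch it assembled with the twenty 'cat' calls above, so configuration changes race with the connect storm. A rough reconstruction of that loop from the trace, not the literal script source:

    rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
    ./connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!                           # 1873612 here
    while kill -0 "$PERF_PID"; do         # loop until the stress tool exits
        rpc_cmd < "$rpcs"                 # feed the queued RPCs to the running target
    done
    wait "$PERF_PID"                      # traced later as 'wait 1873612'
    rm -f "$rpcs"

When kill -0 finally reports 'No such process' the test tears down and nvmftestfini runs, as seen further down in this log.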
00:14:58.377 11:52:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.377 11:52:51 -- common/autotest_common.sh@10 -- # set +x 00:14:58.637 11:52:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.637 11:52:52 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:14:58.637 11:52:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:58.637 11:52:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.637 11:52:52 -- common/autotest_common.sh@10 -- # set +x 00:14:58.898 11:52:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.898 11:52:52 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:14:58.898 11:52:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:58.898 11:52:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.898 11:52:52 -- common/autotest_common.sh@10 -- # set +x 00:14:59.158 11:52:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.158 11:52:52 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:14:59.159 11:52:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:59.159 11:52:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.159 11:52:52 -- common/autotest_common.sh@10 -- # set +x 00:14:59.729 11:52:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.729 11:52:53 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:14:59.729 11:52:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:59.729 11:52:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.729 11:52:53 -- common/autotest_common.sh@10 -- # set +x 00:14:59.990 11:52:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.990 11:52:53 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:14:59.990 11:52:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:59.990 11:52:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.990 11:52:53 -- common/autotest_common.sh@10 -- # set +x 00:15:00.251 11:52:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.251 11:52:53 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:15:00.251 11:52:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.251 11:52:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.251 11:52:53 -- common/autotest_common.sh@10 -- # set +x 00:15:00.512 11:52:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.512 11:52:54 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:15:00.512 11:52:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.512 11:52:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.512 11:52:54 -- common/autotest_common.sh@10 -- # set +x 00:15:00.773 11:52:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.773 11:52:54 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:15:00.773 11:52:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.773 11:52:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.773 11:52:54 -- common/autotest_common.sh@10 -- # set +x 00:15:01.345 11:52:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.345 11:52:54 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:15:01.345 11:52:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.345 11:52:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.345 11:52:54 -- common/autotest_common.sh@10 -- # set +x 00:15:01.606 11:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.606 11:52:55 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:15:01.606 11:52:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.606 
11:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.606 11:52:55 -- common/autotest_common.sh@10 -- # set +x 00:15:01.868 11:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.868 11:52:55 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:15:01.868 11:52:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.868 11:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.868 11:52:55 -- common/autotest_common.sh@10 -- # set +x 00:15:02.129 11:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:02.129 11:52:55 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:15:02.129 11:52:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.129 11:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:02.129 11:52:55 -- common/autotest_common.sh@10 -- # set +x 00:15:02.701 11:52:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:02.701 11:52:56 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:15:02.701 11:52:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.701 11:52:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:02.701 11:52:56 -- common/autotest_common.sh@10 -- # set +x 00:15:02.962 11:52:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:02.962 11:52:56 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:15:02.962 11:52:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.962 11:52:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:02.962 11:52:56 -- common/autotest_common.sh@10 -- # set +x 00:15:03.223 11:52:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.223 11:52:56 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:15:03.223 11:52:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.223 11:52:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.223 11:52:56 -- common/autotest_common.sh@10 -- # set +x 00:15:03.484 11:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.484 11:52:57 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:15:03.484 11:52:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.484 11:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.484 11:52:57 -- common/autotest_common.sh@10 -- # set +x 00:15:03.744 11:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.744 11:52:57 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:15:03.744 11:52:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.744 11:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.744 11:52:57 -- common/autotest_common.sh@10 -- # set +x 00:15:04.316 11:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:04.316 11:52:57 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:15:04.316 11:52:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.316 11:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:04.316 11:52:57 -- common/autotest_common.sh@10 -- # set +x 00:15:04.596 11:52:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:04.596 11:52:58 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:15:04.596 11:52:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.596 11:52:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:04.596 11:52:58 -- common/autotest_common.sh@10 -- # set +x 00:15:04.856 11:52:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:04.856 11:52:58 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:15:04.856 11:52:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.856 11:52:58 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:15:04.856 11:52:58 -- common/autotest_common.sh@10 -- # set +x 00:15:05.117 11:52:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:05.117 11:52:58 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:15:05.117 11:52:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.117 11:52:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:05.117 11:52:58 -- common/autotest_common.sh@10 -- # set +x 00:15:05.117 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:05.378 11:52:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:05.378 11:52:59 -- target/connect_stress.sh@34 -- # kill -0 1873612 00:15:05.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1873612) - No such process 00:15:05.378 11:52:59 -- target/connect_stress.sh@38 -- # wait 1873612 00:15:05.378 11:52:59 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:05.378 11:52:59 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:05.378 11:52:59 -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:05.378 11:52:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:05.378 11:52:59 -- nvmf/common.sh@116 -- # sync 00:15:05.378 11:52:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:05.378 11:52:59 -- nvmf/common.sh@119 -- # set +e 00:15:05.378 11:52:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:05.378 11:52:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:05.378 rmmod nvme_tcp 00:15:05.378 rmmod nvme_fabrics 00:15:05.639 rmmod nvme_keyring 00:15:05.639 11:52:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:05.639 11:52:59 -- nvmf/common.sh@123 -- # set -e 00:15:05.639 11:52:59 -- nvmf/common.sh@124 -- # return 0 00:15:05.639 11:52:59 -- nvmf/common.sh@477 -- # '[' -n 1873574 ']' 00:15:05.639 11:52:59 -- nvmf/common.sh@478 -- # killprocess 1873574 00:15:05.639 11:52:59 -- common/autotest_common.sh@926 -- # '[' -z 1873574 ']' 00:15:05.639 11:52:59 -- common/autotest_common.sh@930 -- # kill -0 1873574 00:15:05.639 11:52:59 -- common/autotest_common.sh@931 -- # uname 00:15:05.639 11:52:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:05.639 11:52:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1873574 00:15:05.639 11:52:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:05.639 11:52:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:05.639 11:52:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1873574' 00:15:05.639 killing process with pid 1873574 00:15:05.639 11:52:59 -- common/autotest_common.sh@945 -- # kill 1873574 00:15:05.639 11:52:59 -- common/autotest_common.sh@950 -- # wait 1873574 00:15:05.639 11:52:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:05.639 11:52:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:05.639 11:52:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:05.639 11:52:59 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:05.639 11:52:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:05.639 11:52:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.639 11:52:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.639 11:52:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.185 11:53:01 -- nvmf/common.sh@278 -- # ip -4 addr 
flush cvl_0_1 00:15:08.185 00:15:08.185 real 0m21.176s 00:15:08.185 user 0m43.045s 00:15:08.185 sys 0m8.775s 00:15:08.185 11:53:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:08.185 11:53:01 -- common/autotest_common.sh@10 -- # set +x 00:15:08.185 ************************************ 00:15:08.185 END TEST nvmf_connect_stress 00:15:08.185 ************************************ 00:15:08.185 11:53:01 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:08.185 11:53:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:08.185 11:53:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:08.185 11:53:01 -- common/autotest_common.sh@10 -- # set +x 00:15:08.185 ************************************ 00:15:08.185 START TEST nvmf_fused_ordering 00:15:08.185 ************************************ 00:15:08.185 11:53:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:08.185 * Looking for test storage... 00:15:08.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:08.185 11:53:01 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:08.185 11:53:01 -- nvmf/common.sh@7 -- # uname -s 00:15:08.185 11:53:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:08.185 11:53:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:08.185 11:53:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:08.185 11:53:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:08.185 11:53:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:08.185 11:53:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:08.185 11:53:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:08.185 11:53:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:08.185 11:53:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:08.185 11:53:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:08.185 11:53:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:08.185 11:53:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:08.185 11:53:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:08.185 11:53:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:08.185 11:53:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:08.185 11:53:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:08.185 11:53:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:08.185 11:53:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:08.185 11:53:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:08.185 11:53:01 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.185 11:53:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.185 11:53:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.185 11:53:01 -- paths/export.sh@5 -- # export PATH 00:15:08.185 11:53:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.185 11:53:01 -- nvmf/common.sh@46 -- # : 0 00:15:08.185 11:53:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:08.185 11:53:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:08.185 11:53:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:08.185 11:53:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:08.185 11:53:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:08.185 11:53:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:08.185 11:53:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:08.185 11:53:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:08.185 11:53:01 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:08.185 11:53:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:08.185 11:53:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:08.185 11:53:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:08.186 11:53:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:08.186 11:53:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:08.186 11:53:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.186 11:53:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:08.186 11:53:01 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.186 11:53:01 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:08.186 11:53:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:08.186 11:53:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:08.186 11:53:01 -- common/autotest_common.sh@10 -- # set +x 00:15:14.774 11:53:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:14.774 11:53:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:14.774 11:53:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:14.774 11:53:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:14.774 11:53:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:14.774 11:53:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:14.774 11:53:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:14.774 11:53:08 -- nvmf/common.sh@294 -- # net_devs=() 00:15:14.774 11:53:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:14.774 11:53:08 -- nvmf/common.sh@295 -- # e810=() 00:15:14.774 11:53:08 -- nvmf/common.sh@295 -- # local -ga e810 00:15:14.774 11:53:08 -- nvmf/common.sh@296 -- # x722=() 00:15:14.774 11:53:08 -- nvmf/common.sh@296 -- # local -ga x722 00:15:14.774 11:53:08 -- nvmf/common.sh@297 -- # mlx=() 00:15:14.774 11:53:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:14.774 11:53:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:14.774 11:53:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:14.774 11:53:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:14.774 11:53:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:14.774 11:53:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:14.774 11:53:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:14.774 11:53:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:14.774 11:53:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:14.774 11:53:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:14.775 11:53:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:14.775 11:53:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:14.775 11:53:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:14.775 11:53:08 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:14.775 11:53:08 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:14.775 11:53:08 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:14.775 11:53:08 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:14.775 11:53:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:14.775 11:53:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:14.775 11:53:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:14.775 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:14.775 11:53:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:14.775 11:53:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:14.775 11:53:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:14.775 11:53:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:14.775 11:53:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:14.775 11:53:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:14.775 11:53:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:14.775 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:14.775 
11:53:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:14.775 11:53:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:14.775 11:53:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:14.775 11:53:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:14.775 11:53:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:14.775 11:53:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:14.775 11:53:08 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:14.775 11:53:08 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:14.775 11:53:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:14.775 11:53:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.775 11:53:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:14.775 11:53:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.775 11:53:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:14.775 Found net devices under 0000:31:00.0: cvl_0_0 00:15:14.775 11:53:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.775 11:53:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:14.775 11:53:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.775 11:53:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:14.775 11:53:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.775 11:53:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:14.775 Found net devices under 0000:31:00.1: cvl_0_1 00:15:14.775 11:53:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.775 11:53:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:14.775 11:53:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:14.775 11:53:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:14.775 11:53:08 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:14.775 11:53:08 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:14.775 11:53:08 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:14.775 11:53:08 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:14.775 11:53:08 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:14.775 11:53:08 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:14.775 11:53:08 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:14.775 11:53:08 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:14.775 11:53:08 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:14.775 11:53:08 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:14.775 11:53:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:14.775 11:53:08 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:14.775 11:53:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:14.775 11:53:08 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:14.775 11:53:08 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:15.036 11:53:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:15.036 11:53:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:15.036 11:53:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:15.036 11:53:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:15.036 11:53:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:15.036 11:53:08 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:15.036 11:53:08 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:15.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:15.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms 00:15:15.036 00:15:15.036 --- 10.0.0.2 ping statistics --- 00:15:15.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.036 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:15:15.036 11:53:08 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:15.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:15.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:15:15.036 00:15:15.036 --- 10.0.0.1 ping statistics --- 00:15:15.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.036 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:15:15.036 11:53:08 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:15.036 11:53:08 -- nvmf/common.sh@410 -- # return 0 00:15:15.036 11:53:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:15.036 11:53:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:15.036 11:53:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:15.036 11:53:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:15.036 11:53:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:15.036 11:53:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:15.036 11:53:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:15.298 11:53:08 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:15.298 11:53:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:15.298 11:53:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:15.298 11:53:08 -- common/autotest_common.sh@10 -- # set +x 00:15:15.298 11:53:08 -- nvmf/common.sh@469 -- # nvmfpid=1880062 00:15:15.298 11:53:08 -- nvmf/common.sh@470 -- # waitforlisten 1880062 00:15:15.298 11:53:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:15.298 11:53:08 -- common/autotest_common.sh@819 -- # '[' -z 1880062 ']' 00:15:15.298 11:53:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.298 11:53:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:15.298 11:53:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.298 11:53:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:15.298 11:53:08 -- common/autotest_common.sh@10 -- # set +x 00:15:15.298 [2024-06-10 11:53:08.888809] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
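As in the connect_stress run above, nvmfappstart launches the target inside the namespace and waitforlisten polls until the RPC socket answers; only the core mask differs (-m 0x2, a single core, versus -m 0xE earlier). A plausible sketch of that start-up handshake, reconstructed from the pid bookkeeping visible in the trace rather than copied from autotest_common.sh:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!                                 # 1880062 in this run
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$nvmfpid" || exit 1           # give up if the target died during start-up
        # assumed poll: any cheap RPC works; rpc_get_methods is used here only for illustration
        ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && break
        sleep 0.5
    done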
00:15:15.298 [2024-06-10 11:53:08.888871] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.298 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.298 [2024-06-10 11:53:08.977930] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.298 [2024-06-10 11:53:09.069032] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:15.298 [2024-06-10 11:53:09.069183] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.298 [2024-06-10 11:53:09.069192] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.298 [2024-06-10 11:53:09.069200] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:15.298 [2024-06-10 11:53:09.069235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.240 11:53:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:16.241 11:53:09 -- common/autotest_common.sh@852 -- # return 0 00:15:16.241 11:53:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:16.241 11:53:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:16.241 11:53:09 -- common/autotest_common.sh@10 -- # set +x 00:15:16.241 11:53:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:16.241 11:53:09 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:16.241 11:53:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.241 11:53:09 -- common/autotest_common.sh@10 -- # set +x 00:15:16.241 [2024-06-10 11:53:09.732412] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:16.241 11:53:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.241 11:53:09 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:16.241 11:53:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.241 11:53:09 -- common/autotest_common.sh@10 -- # set +x 00:15:16.241 11:53:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.241 11:53:09 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:16.241 11:53:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.241 11:53:09 -- common/autotest_common.sh@10 -- # set +x 00:15:16.241 [2024-06-10 11:53:09.748544] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:16.241 11:53:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.241 11:53:09 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:16.241 11:53:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.241 11:53:09 -- common/autotest_common.sh@10 -- # set +x 00:15:16.241 NULL1 00:15:16.241 11:53:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.241 11:53:09 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:16.241 11:53:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.241 11:53:09 -- common/autotest_common.sh@10 -- # set +x 00:15:16.241 11:53:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.241 11:53:09 -- target/fused_ordering.sh@20 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:16.241 11:53:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.241 11:53:09 -- common/autotest_common.sh@10 -- # set +x 00:15:16.241 11:53:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.241 11:53:09 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:16.241 [2024-06-10 11:53:09.802707] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:16.241 [2024-06-10 11:53:09.802747] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1880290 ] 00:15:16.241 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.501 Attached to nqn.2016-06.io.spdk:cnode1 00:15:16.501 Namespace ID: 1 size: 1GB 00:15:16.501 fused_ordering(0) 00:15:16.501 fused_ordering(1) 00:15:16.501 fused_ordering(2) 00:15:16.501 fused_ordering(3) 00:15:16.501 fused_ordering(4) 00:15:16.501 fused_ordering(5) 00:15:16.501 fused_ordering(6) 00:15:16.501 fused_ordering(7) 00:15:16.501 fused_ordering(8) 00:15:16.501 fused_ordering(9) 00:15:16.501 fused_ordering(10) 00:15:16.501 fused_ordering(11) 00:15:16.501 fused_ordering(12) 00:15:16.501 fused_ordering(13) 00:15:16.501 fused_ordering(14) 00:15:16.501 fused_ordering(15) 00:15:16.501 fused_ordering(16) 00:15:16.501 fused_ordering(17) 00:15:16.501 fused_ordering(18) 00:15:16.501 fused_ordering(19) 00:15:16.501 fused_ordering(20) 00:15:16.501 fused_ordering(21) 00:15:16.502 fused_ordering(22) 00:15:16.502 fused_ordering(23) 00:15:16.502 fused_ordering(24) 00:15:16.502 fused_ordering(25) 00:15:16.502 fused_ordering(26) 00:15:16.502 fused_ordering(27) 00:15:16.502 fused_ordering(28) 00:15:16.502 fused_ordering(29) 00:15:16.502 fused_ordering(30) 00:15:16.502 fused_ordering(31) 00:15:16.502 fused_ordering(32) 00:15:16.502 fused_ordering(33) 00:15:16.502 fused_ordering(34) 00:15:16.502 fused_ordering(35) 00:15:16.502 fused_ordering(36) 00:15:16.502 fused_ordering(37) 00:15:16.502 fused_ordering(38) 00:15:16.502 fused_ordering(39) 00:15:16.502 fused_ordering(40) 00:15:16.502 fused_ordering(41) 00:15:16.502 fused_ordering(42) 00:15:16.502 fused_ordering(43) 00:15:16.502 fused_ordering(44) 00:15:16.502 fused_ordering(45) 00:15:16.502 fused_ordering(46) 00:15:16.502 fused_ordering(47) 00:15:16.502 fused_ordering(48) 00:15:16.502 fused_ordering(49) 00:15:16.502 fused_ordering(50) 00:15:16.502 fused_ordering(51) 00:15:16.502 fused_ordering(52) 00:15:16.502 fused_ordering(53) 00:15:16.502 fused_ordering(54) 00:15:16.502 fused_ordering(55) 00:15:16.502 fused_ordering(56) 00:15:16.502 fused_ordering(57) 00:15:16.502 fused_ordering(58) 00:15:16.502 fused_ordering(59) 00:15:16.502 fused_ordering(60) 00:15:16.502 fused_ordering(61) 00:15:16.502 fused_ordering(62) 00:15:16.502 fused_ordering(63) 00:15:16.502 fused_ordering(64) 00:15:16.502 fused_ordering(65) 00:15:16.502 fused_ordering(66) 00:15:16.502 fused_ordering(67) 00:15:16.502 fused_ordering(68) 00:15:16.502 fused_ordering(69) 00:15:16.502 fused_ordering(70) 00:15:16.502 fused_ordering(71) 00:15:16.502 fused_ordering(72) 00:15:16.502 fused_ordering(73) 00:15:16.502 fused_ordering(74) 00:15:16.502 fused_ordering(75) 00:15:16.502 fused_ordering(76) 00:15:16.502 fused_ordering(77) 
00:15:16.502 [fused_ordering test output: fused_ordering(78) through fused_ordering(1023) reported in sequence between 00:15:16.502 and 00:15:18.852; the individual entries are condensed here.]
00:15:18.852 11:53:12 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:18.852 11:53:12 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:18.852 11:53:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:18.852 11:53:12 -- nvmf/common.sh@116 -- # sync 00:15:18.852 11:53:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:18.852 11:53:12 -- nvmf/common.sh@119 -- # set +e 00:15:18.852 11:53:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:18.852 11:53:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:18.852 rmmod nvme_tcp 00:15:18.852 rmmod nvme_fabrics 00:15:18.852 rmmod nvme_keyring 00:15:18.852 11:53:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:18.852 11:53:12
-- nvmf/common.sh@123 -- # set -e 00:15:18.852 11:53:12 -- nvmf/common.sh@124 -- # return 0 00:15:18.852 11:53:12 -- nvmf/common.sh@477 -- # '[' -n 1880062 ']' 00:15:18.852 11:53:12 -- nvmf/common.sh@478 -- # killprocess 1880062 00:15:18.852 11:53:12 -- common/autotest_common.sh@926 -- # '[' -z 1880062 ']' 00:15:18.852 11:53:12 -- common/autotest_common.sh@930 -- # kill -0 1880062 00:15:18.852 11:53:12 -- common/autotest_common.sh@931 -- # uname 00:15:18.852 11:53:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:18.852 11:53:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1880062 00:15:18.852 11:53:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:18.852 11:53:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:18.852 11:53:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1880062' 00:15:18.852 killing process with pid 1880062 00:15:18.852 11:53:12 -- common/autotest_common.sh@945 -- # kill 1880062 00:15:18.852 11:53:12 -- common/autotest_common.sh@950 -- # wait 1880062 00:15:18.852 11:53:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:18.852 11:53:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:18.852 11:53:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:18.852 11:53:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:18.852 11:53:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:18.852 11:53:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.852 11:53:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:18.852 11:53:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.398 11:53:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:21.398 00:15:21.398 real 0m13.141s 00:15:21.398 user 0m7.117s 00:15:21.398 sys 0m6.876s 00:15:21.398 11:53:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.398 11:53:14 -- common/autotest_common.sh@10 -- # set +x 00:15:21.398 ************************************ 00:15:21.398 END TEST nvmf_fused_ordering 00:15:21.398 ************************************ 00:15:21.398 11:53:14 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:21.398 11:53:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:21.398 11:53:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:21.398 11:53:14 -- common/autotest_common.sh@10 -- # set +x 00:15:21.398 ************************************ 00:15:21.398 START TEST nvmf_delete_subsystem 00:15:21.398 ************************************ 00:15:21.398 11:53:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:21.398 * Looking for test storage... 
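For orientation before the delete_subsystem output continues: the nvmftestfini teardown traced above for the fused_ordering run reduces to roughly the following shell sequence. This is a simplified sketch based only on the nvmf/common.sh trace in this log; the retry and error handling in the real helper is more involved, and PID 1880062 is the nvmf target process started earlier for that test.

    set +e                              # tolerate module-unload failures while retrying
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # also drops nvme_fabrics / nvme_keyring, per the rmmod output
    done
    set -e
    kill 1880062 && wait 1880062        # stop the nvmf target (reactor) process
    ip -4 addr flush cvl_0_1            # remove the test address from the initiator-side port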
00:15:21.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:21.398 11:53:14 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:21.398 11:53:14 -- nvmf/common.sh@7 -- # uname -s 00:15:21.398 11:53:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.398 11:53:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.398 11:53:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.398 11:53:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.398 11:53:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.398 11:53:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.398 11:53:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.398 11:53:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.398 11:53:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.398 11:53:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.398 11:53:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:21.398 11:53:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:21.398 11:53:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.398 11:53:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.398 11:53:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:21.398 11:53:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:21.398 11:53:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.398 11:53:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.398 11:53:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.398 11:53:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.398 11:53:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.398 11:53:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.398 11:53:14 -- paths/export.sh@5 -- # export PATH 00:15:21.398 11:53:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.398 11:53:14 -- nvmf/common.sh@46 -- # : 0 00:15:21.398 11:53:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:21.398 11:53:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:21.398 11:53:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:21.398 11:53:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.398 11:53:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.398 11:53:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:21.398 11:53:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:21.398 11:53:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:21.398 11:53:14 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:15:21.398 11:53:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:21.398 11:53:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.398 11:53:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:21.398 11:53:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:21.398 11:53:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:21.398 11:53:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.398 11:53:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:21.398 11:53:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.398 11:53:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:21.398 11:53:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:21.398 11:53:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:21.398 11:53:14 -- common/autotest_common.sh@10 -- # set +x 00:15:27.988 11:53:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:27.988 11:53:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:27.988 11:53:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:27.988 11:53:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:27.988 11:53:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:27.988 11:53:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:27.988 11:53:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:27.988 11:53:21 -- nvmf/common.sh@294 -- # net_devs=() 00:15:27.988 11:53:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:27.988 11:53:21 -- nvmf/common.sh@295 -- # e810=() 00:15:27.988 11:53:21 -- nvmf/common.sh@295 -- # local -ga e810 00:15:27.988 11:53:21 -- nvmf/common.sh@296 -- # x722=() 
00:15:27.988 11:53:21 -- nvmf/common.sh@296 -- # local -ga x722 00:15:27.988 11:53:21 -- nvmf/common.sh@297 -- # mlx=() 00:15:27.988 11:53:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:27.988 11:53:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:27.988 11:53:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:27.988 11:53:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:27.988 11:53:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:27.988 11:53:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:27.988 11:53:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:27.988 11:53:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:27.988 11:53:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:27.988 11:53:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:27.988 11:53:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:27.988 11:53:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:27.988 11:53:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:27.988 11:53:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:27.988 11:53:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:27.988 11:53:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:27.988 11:53:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:27.988 11:53:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:27.988 11:53:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:27.988 11:53:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:27.988 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:27.988 11:53:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:27.988 11:53:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:27.988 11:53:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:27.988 11:53:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:27.988 11:53:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:27.988 11:53:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:27.988 11:53:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:27.988 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:27.988 11:53:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:27.988 11:53:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:27.988 11:53:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:27.988 11:53:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:27.988 11:53:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:27.988 11:53:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:27.988 11:53:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:27.988 11:53:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:27.988 11:53:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:27.988 11:53:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:27.988 11:53:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:27.988 11:53:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:27.988 11:53:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:27.989 Found net devices under 0000:31:00.0: cvl_0_0 00:15:27.989 11:53:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:15:27.989 11:53:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:27.989 11:53:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:27.989 11:53:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:27.989 11:53:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:27.989 11:53:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:27.989 Found net devices under 0000:31:00.1: cvl_0_1 00:15:27.989 11:53:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:27.989 11:53:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:27.989 11:53:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:27.989 11:53:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:27.989 11:53:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:27.989 11:53:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:27.989 11:53:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:27.989 11:53:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:27.989 11:53:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:27.989 11:53:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:27.989 11:53:21 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:27.989 11:53:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:27.989 11:53:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:27.989 11:53:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:27.989 11:53:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:27.989 11:53:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:27.989 11:53:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:27.989 11:53:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:27.989 11:53:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:28.250 11:53:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:28.250 11:53:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:28.250 11:53:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:28.250 11:53:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:28.250 11:53:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:28.511 11:53:22 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:28.511 11:53:22 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:28.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:28.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:15:28.511 00:15:28.511 --- 10.0.0.2 ping statistics --- 00:15:28.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.511 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:15:28.511 11:53:22 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:28.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:28.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:15:28.511 00:15:28.511 --- 10.0.0.1 ping statistics --- 00:15:28.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.511 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:15:28.511 11:53:22 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:28.511 11:53:22 -- nvmf/common.sh@410 -- # return 0 00:15:28.511 11:53:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:28.511 11:53:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:28.511 11:53:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:28.511 11:53:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:28.511 11:53:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:28.511 11:53:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:28.511 11:53:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:28.511 11:53:22 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:28.511 11:53:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:28.511 11:53:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:28.511 11:53:22 -- common/autotest_common.sh@10 -- # set +x 00:15:28.511 11:53:22 -- nvmf/common.sh@469 -- # nvmfpid=1885072 00:15:28.511 11:53:22 -- nvmf/common.sh@470 -- # waitforlisten 1885072 00:15:28.511 11:53:22 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:28.511 11:53:22 -- common/autotest_common.sh@819 -- # '[' -z 1885072 ']' 00:15:28.511 11:53:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.511 11:53:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:28.511 11:53:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.511 11:53:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:28.511 11:53:22 -- common/autotest_common.sh@10 -- # set +x 00:15:28.511 [2024-06-10 11:53:22.146190] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:28.511 [2024-06-10 11:53:22.146260] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.511 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.511 [2024-06-10 11:53:22.216908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:28.772 [2024-06-10 11:53:22.290253] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:28.772 [2024-06-10 11:53:22.290376] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.772 [2024-06-10 11:53:22.290385] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:28.772 [2024-06-10 11:53:22.290392] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
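The namespace plumbing that nvmf_tcp_init traced just above boils down to this sketch: the target-side port cvl_0_0 is moved into a private network namespace and given 10.0.0.2, the initiator keeps cvl_0_1 with 10.0.0.1 in the host namespace, reachability is checked both ways with ping, and the target application is then launched inside that namespace. The command lines are taken from the trace; the interface selection and option handling in common.sh are simplified away here.

    ip netns add cvl_0_0_ns_spdk                        # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # host -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x3   # full build/bin path as in the trace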
00:15:28.772 [2024-06-10 11:53:22.290533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.772 [2024-06-10 11:53:22.290535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.343 11:53:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:29.343 11:53:22 -- common/autotest_common.sh@852 -- # return 0 00:15:29.343 11:53:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:29.343 11:53:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:29.343 11:53:22 -- common/autotest_common.sh@10 -- # set +x 00:15:29.343 11:53:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:29.343 11:53:22 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:29.343 11:53:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.343 11:53:22 -- common/autotest_common.sh@10 -- # set +x 00:15:29.343 [2024-06-10 11:53:22.950388] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:29.343 11:53:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.343 11:53:22 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:29.343 11:53:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.343 11:53:22 -- common/autotest_common.sh@10 -- # set +x 00:15:29.343 11:53:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.343 11:53:22 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:29.343 11:53:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.343 11:53:22 -- common/autotest_common.sh@10 -- # set +x 00:15:29.343 [2024-06-10 11:53:22.974552] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:29.343 11:53:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.343 11:53:22 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:29.343 11:53:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.343 11:53:22 -- common/autotest_common.sh@10 -- # set +x 00:15:29.343 NULL1 00:15:29.343 11:53:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.343 11:53:22 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:29.343 11:53:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.343 11:53:22 -- common/autotest_common.sh@10 -- # set +x 00:15:29.343 Delay0 00:15:29.343 11:53:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.343 11:53:23 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:29.343 11:53:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.343 11:53:23 -- common/autotest_common.sh@10 -- # set +x 00:15:29.343 11:53:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.343 11:53:23 -- target/delete_subsystem.sh@28 -- # perf_pid=1885188 00:15:29.343 11:53:23 -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:29.343 11:53:23 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:29.343 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.343 [2024-06-10 11:53:23.071249] 
subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:31.332 11:53:25 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:31.332 11:53:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.332 11:53:25 -- common/autotest_common.sh@10 -- # set +x 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Write completed with error (sct=0, sc=8) 00:15:31.622 starting I/O failed: -6 00:15:31.622 Write completed with error (sct=0, sc=8) 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 starting I/O failed: -6 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Write completed with error (sct=0, sc=8) 00:15:31.622 starting I/O failed: -6 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Write completed with error (sct=0, sc=8) 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 starting I/O failed: -6 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 starting I/O failed: -6 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Write completed with error (sct=0, sc=8) 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 starting I/O failed: -6 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Write completed with error (sct=0, sc=8) 00:15:31.622 starting I/O failed: -6 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Write completed with error (sct=0, sc=8) 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 starting I/O failed: -6 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 starting I/O failed: -6 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Write completed with error (sct=0, sc=8) 00:15:31.622 starting I/O failed: -6 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 starting I/O failed: -6 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 Read completed with error (sct=0, sc=8) 00:15:31.622 [2024-06-10 11:53:25.114933] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x527a90 is same with the state(5) to be set 00:15:31.622 Read completed with error (sct=0, sc=8) 
00:15:31.622 [Repeated initiator-side completion records follow while the subsystem is torn down: "Read completed with error (sct=0, sc=8)" and "Write completed with error (sct=0, sc=8)" entries, interleaved with "starting I/O failed: -6"; the run is condensed here.]
[2024-06-10 11:53:25.120634] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0eec000c00 is same with the state(5) to be set
[2024-06-10 11:53:26.087385] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5325e0 is same with the state(5) to be set
00:15:32.565 [Further Read/Write error completions follow; the capture is truncated mid-record at this point.]
completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 [2024-06-10 11:53:26.118365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5118b0 is same with the state(5) to be set 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 [2024-06-10 11:53:26.118788] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x527910 is same with the state(5) to be set 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Write completed with 
error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 [2024-06-10 11:53:26.121983] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0eec00bf20 is same with the state(5) to be set 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Write completed with error (sct=0, sc=8) 00:15:32.565 Read completed with error (sct=0, sc=8) 00:15:32.566 Read completed with error (sct=0, sc=8) 00:15:32.566 Write completed with error (sct=0, sc=8) 00:15:32.566 Read completed with error (sct=0, sc=8) 00:15:32.566 Write completed with error (sct=0, sc=8) 00:15:32.566 Read completed with error (sct=0, sc=8) 00:15:32.566 Read completed with error (sct=0, sc=8) 00:15:32.566 Read completed with error (sct=0, sc=8) 
00:15:32.566 Read completed with error (sct=0, sc=8) 00:15:32.566 Read completed with error (sct=0, sc=8) 00:15:32.566 [2024-06-10 11:53:26.122116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0eec00c600 is same with the state(5) to be set 00:15:32.566 [2024-06-10 11:53:26.122695] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5325e0 (9): Bad file descriptor 00:15:32.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:32.566 Initializing NVMe Controllers 00:15:32.566 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:32.566 Controller IO queue size 128, less than required. 00:15:32.566 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:32.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:32.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:32.566 Initialization complete. Launching workers. 00:15:32.566 ======================================================== 00:15:32.566 Latency(us) 00:15:32.566 Device Information : IOPS MiB/s Average min max 00:15:32.566 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.34 0.08 891952.84 208.63 1006277.58 00:15:32.566 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 186.77 0.09 905314.98 425.31 1011525.23 00:15:32.566 ======================================================== 00:15:32.566 Total : 357.11 0.17 898941.41 208.63 1011525.23 00:15:32.566 00:15:32.566 11:53:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:32.566 11:53:26 -- target/delete_subsystem.sh@34 -- # delay=0 00:15:32.566 11:53:26 -- target/delete_subsystem.sh@35 -- # kill -0 1885188 00:15:32.566 11:53:26 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:33.137 11:53:26 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:33.137 11:53:26 -- target/delete_subsystem.sh@35 -- # kill -0 1885188 00:15:33.137 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1885188) - No such process 00:15:33.137 11:53:26 -- target/delete_subsystem.sh@45 -- # NOT wait 1885188 00:15:33.137 11:53:26 -- common/autotest_common.sh@640 -- # local es=0 00:15:33.137 11:53:26 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 1885188 00:15:33.137 11:53:26 -- common/autotest_common.sh@628 -- # local arg=wait 00:15:33.137 11:53:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:33.137 11:53:26 -- common/autotest_common.sh@632 -- # type -t wait 00:15:33.137 11:53:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:33.137 11:53:26 -- common/autotest_common.sh@643 -- # wait 1885188 00:15:33.137 11:53:26 -- common/autotest_common.sh@643 -- # es=1 00:15:33.137 11:53:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:33.137 11:53:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:33.137 11:53:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:33.137 11:53:26 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:33.137 11:53:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.137 11:53:26 -- common/autotest_common.sh@10 -- # set +x 00:15:33.137 11:53:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
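Note: the kill -0 / sleep 0.5 entries above are delete_subsystem.sh waiting for its spdk_nvme_perf workload to exit once the subsystem it was driving is torn down. Below is a minimal sketch of that launch-and-poll pattern, reusing the perf flags from this trace; the surrounding scaffolding (variable names, iteration cap handling) is illustrative, not a copy of the SPDK test script.

#!/usr/bin/env bash
# Minimal sketch (not the SPDK script itself): start the perf workload in the
# background, then poll it the way delete_subsystem.sh does in the trace above.
SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin

"$SPDK_BIN/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

delay=0
# kill -0 delivers no signal; it only checks whether the pid still exists.
while kill -0 "$perf_pid" 2>/dev/null; do
    if (( delay++ > 30 )); then
        echo "perf pid $perf_pid did not exit in time" >&2
        exit 1
    fi
    sleep 0.5
done
echo "perf pid $perf_pid has exited"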
00:15:33.137 11:53:26 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:33.137 11:53:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.137 11:53:26 -- common/autotest_common.sh@10 -- # set +x 00:15:33.137 [2024-06-10 11:53:26.655304] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.137 11:53:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.137 11:53:26 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:33.137 11:53:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.137 11:53:26 -- common/autotest_common.sh@10 -- # set +x 00:15:33.137 11:53:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.137 11:53:26 -- target/delete_subsystem.sh@54 -- # perf_pid=1885880 00:15:33.137 11:53:26 -- target/delete_subsystem.sh@56 -- # delay=0 00:15:33.137 11:53:26 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:33.137 11:53:26 -- target/delete_subsystem.sh@57 -- # kill -0 1885880 00:15:33.137 11:53:26 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:33.137 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.137 [2024-06-10 11:53:26.721729] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:33.708 11:53:27 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:33.708 11:53:27 -- target/delete_subsystem.sh@57 -- # kill -0 1885880 00:15:33.708 11:53:27 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:33.969 11:53:27 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:33.969 11:53:27 -- target/delete_subsystem.sh@57 -- # kill -0 1885880 00:15:33.969 11:53:27 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:34.540 11:53:28 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:34.540 11:53:28 -- target/delete_subsystem.sh@57 -- # kill -0 1885880 00:15:34.540 11:53:28 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:35.112 11:53:28 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:35.112 11:53:28 -- target/delete_subsystem.sh@57 -- # kill -0 1885880 00:15:35.112 11:53:28 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:35.683 11:53:29 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:35.683 11:53:29 -- target/delete_subsystem.sh@57 -- # kill -0 1885880 00:15:35.683 11:53:29 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:35.944 11:53:29 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:35.944 11:53:29 -- target/delete_subsystem.sh@57 -- # kill -0 1885880 00:15:35.944 11:53:29 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:36.206 Initializing NVMe Controllers 00:15:36.206 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:36.206 Controller IO queue size 128, less than required. 00:15:36.206 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:15:36.206 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:36.206 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:36.206 Initialization complete. Launching workers. 00:15:36.206 ======================================================== 00:15:36.206 Latency(us) 00:15:36.206 Device Information : IOPS MiB/s Average min max 00:15:36.206 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002058.94 1000267.35 1006062.32 00:15:36.206 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002897.77 1000301.76 1009284.96 00:15:36.206 ======================================================== 00:15:36.206 Total : 256.00 0.12 1002478.35 1000267.35 1009284.96 00:15:36.206 00:15:36.467 11:53:30 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:36.468 11:53:30 -- target/delete_subsystem.sh@57 -- # kill -0 1885880 00:15:36.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1885880) - No such process 00:15:36.468 11:53:30 -- target/delete_subsystem.sh@67 -- # wait 1885880 00:15:36.468 11:53:30 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:36.468 11:53:30 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:36.468 11:53:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:36.468 11:53:30 -- nvmf/common.sh@116 -- # sync 00:15:36.468 11:53:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:36.468 11:53:30 -- nvmf/common.sh@119 -- # set +e 00:15:36.468 11:53:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:36.468 11:53:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:36.468 rmmod nvme_tcp 00:15:36.468 rmmod nvme_fabrics 00:15:36.729 rmmod nvme_keyring 00:15:36.729 11:53:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:36.729 11:53:30 -- nvmf/common.sh@123 -- # set -e 00:15:36.729 11:53:30 -- nvmf/common.sh@124 -- # return 0 00:15:36.729 11:53:30 -- nvmf/common.sh@477 -- # '[' -n 1885072 ']' 00:15:36.729 11:53:30 -- nvmf/common.sh@478 -- # killprocess 1885072 00:15:36.729 11:53:30 -- common/autotest_common.sh@926 -- # '[' -z 1885072 ']' 00:15:36.729 11:53:30 -- common/autotest_common.sh@930 -- # kill -0 1885072 00:15:36.729 11:53:30 -- common/autotest_common.sh@931 -- # uname 00:15:36.729 11:53:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:36.729 11:53:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1885072 00:15:36.729 11:53:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:36.729 11:53:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:36.729 11:53:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1885072' 00:15:36.729 killing process with pid 1885072 00:15:36.729 11:53:30 -- common/autotest_common.sh@945 -- # kill 1885072 00:15:36.729 11:53:30 -- common/autotest_common.sh@950 -- # wait 1885072 00:15:36.729 11:53:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:36.729 11:53:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:36.729 11:53:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:36.729 11:53:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:36.729 11:53:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:36.729 11:53:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.729 11:53:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
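Note: the nvmftestfini / rmmod output above is the per-test cleanup that follows every target test in this run. A rough sketch of that teardown order is shown below; it is a simplification of nvmf/common.sh, not a copy of it, and assumes the target pid is held in $nvmfpid as in the trace.

# Rough teardown sketch (simplified from the nvmftestfini sequence traced above).
sync
# Unload the initiator-side NVMe fabrics modules; retry while references drain.
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1
done
# Stop the nvmf_tgt application that served this test (pid recorded as $nvmfpid).
kill "$nvmfpid" 2>/dev/null
wait "$nvmfpid" 2>/dev/null
# Drop the initiator test address from the second port.
ip -4 addr flush cvl_0_1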
00:15:36.729 11:53:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.276 11:53:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:39.276 00:15:39.276 real 0m17.882s 00:15:39.276 user 0m30.425s 00:15:39.276 sys 0m6.112s 00:15:39.276 11:53:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:39.276 11:53:32 -- common/autotest_common.sh@10 -- # set +x 00:15:39.276 ************************************ 00:15:39.276 END TEST nvmf_delete_subsystem 00:15:39.276 ************************************ 00:15:39.276 11:53:32 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:15:39.276 11:53:32 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:39.276 11:53:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:39.276 11:53:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:39.276 11:53:32 -- common/autotest_common.sh@10 -- # set +x 00:15:39.276 ************************************ 00:15:39.276 START TEST nvmf_nvme_cli 00:15:39.276 ************************************ 00:15:39.276 11:53:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:39.276 * Looking for test storage... 00:15:39.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:39.276 11:53:32 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:39.276 11:53:32 -- nvmf/common.sh@7 -- # uname -s 00:15:39.276 11:53:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.276 11:53:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.276 11:53:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.276 11:53:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.276 11:53:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.276 11:53:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.276 11:53:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.276 11:53:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.276 11:53:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.276 11:53:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.276 11:53:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:39.276 11:53:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:39.276 11:53:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.276 11:53:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.276 11:53:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:39.276 11:53:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:39.276 11:53:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.276 11:53:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.276 11:53:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.276 11:53:32 -- paths/export.sh@2 -- # 
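Note: nvmf/common.sh above derives a host identity once (nvme gen-hostnqn, NVME_HOSTNQN, NVME_HOSTID) and reuses it for every nvme discover/connect later in this test. A minimal sketch of that derivation and reuse is shown below; the parameter expansion used to extract the uuid is illustrative, not lifted from common.sh.

# Minimal sketch: derive the host identity once and reuse it, as this test does.
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep only the trailing uuid (illustrative expansion)
nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -a 10.0.0.2 -s 4420
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420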
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.276 11:53:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.276 11:53:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.276 11:53:32 -- paths/export.sh@5 -- # export PATH 00:15:39.276 11:53:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.276 11:53:32 -- nvmf/common.sh@46 -- # : 0 00:15:39.276 11:53:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:39.276 11:53:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:39.276 11:53:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:39.276 11:53:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.276 11:53:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.276 11:53:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:39.276 11:53:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:39.276 11:53:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:39.276 11:53:32 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:39.276 11:53:32 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:39.276 11:53:32 -- target/nvme_cli.sh@14 -- # devs=() 00:15:39.276 11:53:32 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:39.276 11:53:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:39.276 11:53:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:39.276 11:53:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:39.276 11:53:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:39.276 11:53:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 
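Note: gather_supported_nvmf_pci_devs, traced just below, resolves each supported E810/X722/MLX PCI function to its kernel netdev through sysfs. The loop below is a simplified sketch of that lookup for the two E810 ports found in this run; it is not the nvmf/common.sh implementation itself.

# Simplified sketch of the sysfs lookup used below to map the two E810
# functions (0000:31:00.0 and 0000:31:00.1) to their netdevs, cvl_0_0 and cvl_0_1.
for pci in 0000:31:00.0 0000:31:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev directories for this function
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done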
00:15:39.276 11:53:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.276 11:53:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.276 11:53:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.276 11:53:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:39.276 11:53:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:39.276 11:53:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:39.276 11:53:32 -- common/autotest_common.sh@10 -- # set +x 00:15:47.421 11:53:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:47.421 11:53:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:47.421 11:53:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:47.421 11:53:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:47.421 11:53:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:47.421 11:53:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:47.421 11:53:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:47.421 11:53:39 -- nvmf/common.sh@294 -- # net_devs=() 00:15:47.421 11:53:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:47.421 11:53:39 -- nvmf/common.sh@295 -- # e810=() 00:15:47.421 11:53:39 -- nvmf/common.sh@295 -- # local -ga e810 00:15:47.421 11:53:39 -- nvmf/common.sh@296 -- # x722=() 00:15:47.421 11:53:39 -- nvmf/common.sh@296 -- # local -ga x722 00:15:47.421 11:53:39 -- nvmf/common.sh@297 -- # mlx=() 00:15:47.421 11:53:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:47.421 11:53:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:47.421 11:53:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:47.421 11:53:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:47.421 11:53:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:47.421 11:53:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:47.421 11:53:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:47.421 11:53:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:47.421 11:53:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:47.421 11:53:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:47.421 11:53:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:47.421 11:53:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:47.421 11:53:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:47.421 11:53:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:47.421 11:53:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:47.421 11:53:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:47.421 11:53:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:47.421 11:53:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:47.421 11:53:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:47.421 11:53:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:47.421 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:47.421 11:53:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:47.421 11:53:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:47.421 11:53:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:47.421 11:53:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:47.421 11:53:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:47.421 11:53:39 -- 
nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:47.421 11:53:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:47.421 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:47.421 11:53:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:47.421 11:53:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:47.421 11:53:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:47.421 11:53:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:47.421 11:53:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:47.421 11:53:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:47.421 11:53:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:47.421 11:53:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:47.421 11:53:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:47.421 11:53:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.421 11:53:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:47.421 11:53:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.421 11:53:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:47.421 Found net devices under 0000:31:00.0: cvl_0_0 00:15:47.421 11:53:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.421 11:53:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:47.421 11:53:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.421 11:53:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:47.421 11:53:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.421 11:53:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:47.421 Found net devices under 0000:31:00.1: cvl_0_1 00:15:47.421 11:53:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.421 11:53:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:47.421 11:53:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:47.421 11:53:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:47.421 11:53:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:47.421 11:53:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:47.421 11:53:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:47.421 11:53:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:47.421 11:53:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:47.421 11:53:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:47.421 11:53:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:47.422 11:53:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:47.422 11:53:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:47.422 11:53:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:47.422 11:53:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:47.422 11:53:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:47.422 11:53:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:47.422 11:53:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:47.422 11:53:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:47.422 11:53:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:47.422 11:53:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:47.422 11:53:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:47.422 11:53:39 -- 
nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:47.422 11:53:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:47.422 11:53:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:47.422 11:53:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:47.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:47.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:15:47.422 00:15:47.422 --- 10.0.0.2 ping statistics --- 00:15:47.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.422 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:15:47.422 11:53:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:47.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:47.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:15:47.422 00:15:47.422 --- 10.0.0.1 ping statistics --- 00:15:47.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.422 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:15:47.422 11:53:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:47.422 11:53:39 -- nvmf/common.sh@410 -- # return 0 00:15:47.422 11:53:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:47.422 11:53:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:47.422 11:53:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:47.422 11:53:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:47.422 11:53:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:47.422 11:53:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:47.422 11:53:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:47.422 11:53:40 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:47.422 11:53:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:47.422 11:53:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:47.422 11:53:40 -- common/autotest_common.sh@10 -- # set +x 00:15:47.422 11:53:40 -- nvmf/common.sh@469 -- # nvmfpid=1890969 00:15:47.422 11:53:40 -- nvmf/common.sh@470 -- # waitforlisten 1890969 00:15:47.422 11:53:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:47.422 11:53:40 -- common/autotest_common.sh@819 -- # '[' -z 1890969 ']' 00:15:47.422 11:53:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.422 11:53:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:47.422 11:53:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.422 11:53:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:47.422 11:53:40 -- common/autotest_common.sh@10 -- # set +x 00:15:47.422 [2024-06-10 11:53:40.079509] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
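Note: the nvmf_tcp_init calls above move one port of the adapter into a private network namespace so the target and the initiator can exercise real TCP on a single host. Below is a condensed sketch of that wiring and the reachability check, using the interface names and addresses from this trace; ordering and error handling are simplified relative to nvmf/common.sh.

# Condensed sketch of the namespace wiring traced above (names/addresses from this run).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                            # target address must answer before the tests start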
00:15:47.422 [2024-06-10 11:53:40.079564] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.422 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.422 [2024-06-10 11:53:40.148948] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:47.422 [2024-06-10 11:53:40.216183] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:47.422 [2024-06-10 11:53:40.216325] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:47.422 [2024-06-10 11:53:40.216335] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:47.422 [2024-06-10 11:53:40.216343] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:47.422 [2024-06-10 11:53:40.216480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.422 [2024-06-10 11:53:40.216589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:47.422 [2024-06-10 11:53:40.216737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.422 [2024-06-10 11:53:40.216738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:47.422 11:53:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:47.422 11:53:40 -- common/autotest_common.sh@852 -- # return 0 00:15:47.422 11:53:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:47.422 11:53:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:47.422 11:53:40 -- common/autotest_common.sh@10 -- # set +x 00:15:47.422 11:53:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:47.422 11:53:40 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:47.422 11:53:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:47.422 11:53:40 -- common/autotest_common.sh@10 -- # set +x 00:15:47.422 [2024-06-10 11:53:40.888437] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:47.422 11:53:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:47.422 11:53:40 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:47.422 11:53:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:47.422 11:53:40 -- common/autotest_common.sh@10 -- # set +x 00:15:47.422 Malloc0 00:15:47.422 11:53:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:47.422 11:53:40 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:47.422 11:53:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:47.422 11:53:40 -- common/autotest_common.sh@10 -- # set +x 00:15:47.422 Malloc1 00:15:47.422 11:53:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:47.422 11:53:40 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:47.422 11:53:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:47.422 11:53:40 -- common/autotest_common.sh@10 -- # set +x 00:15:47.422 11:53:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:47.422 11:53:40 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:47.422 11:53:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:47.422 
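Note: the rpc_cmd calls above and below are thin wrappers that forward their arguments to scripts/rpc.py against the target started for this test (default socket /var/tmp/spdk.sock). A hedged sketch of the same nvme_cli provisioning issued through rpc.py directly is shown below; the path is this workspace's, and the commands mirror the rpc_cmd arguments seen in the trace.

# Sketch: the nvme_cli provisioning sequence issued through rpc.py directly.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8192-byte IO unit
$rpc bdev_malloc_create 64 512 -b Malloc0                     # two 64 MiB ramdisks with 512-byte blocks
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420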
11:53:40 -- common/autotest_common.sh@10 -- # set +x 00:15:47.422 11:53:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:47.422 11:53:40 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:47.422 11:53:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:47.422 11:53:40 -- common/autotest_common.sh@10 -- # set +x 00:15:47.422 11:53:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:47.422 11:53:40 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:47.422 11:53:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:47.422 11:53:40 -- common/autotest_common.sh@10 -- # set +x 00:15:47.422 [2024-06-10 11:53:40.974359] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:47.422 11:53:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:47.422 11:53:40 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:47.422 11:53:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:47.422 11:53:40 -- common/autotest_common.sh@10 -- # set +x 00:15:47.422 11:53:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:47.422 11:53:40 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:15:47.422 00:15:47.422 Discovery Log Number of Records 2, Generation counter 2 00:15:47.422 =====Discovery Log Entry 0====== 00:15:47.422 trtype: tcp 00:15:47.422 adrfam: ipv4 00:15:47.422 subtype: current discovery subsystem 00:15:47.422 treq: not required 00:15:47.422 portid: 0 00:15:47.422 trsvcid: 4420 00:15:47.422 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:47.422 traddr: 10.0.0.2 00:15:47.422 eflags: explicit discovery connections, duplicate discovery information 00:15:47.422 sectype: none 00:15:47.422 =====Discovery Log Entry 1====== 00:15:47.422 trtype: tcp 00:15:47.422 adrfam: ipv4 00:15:47.422 subtype: nvme subsystem 00:15:47.422 treq: not required 00:15:47.422 portid: 0 00:15:47.422 trsvcid: 4420 00:15:47.422 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:47.422 traddr: 10.0.0.2 00:15:47.422 eflags: none 00:15:47.422 sectype: none 00:15:47.422 11:53:41 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:47.422 11:53:41 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:47.422 11:53:41 -- nvmf/common.sh@510 -- # local dev _ 00:15:47.422 11:53:41 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:47.422 11:53:41 -- nvmf/common.sh@509 -- # nvme list 00:15:47.422 11:53:41 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:47.422 11:53:41 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:47.422 11:53:41 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:47.422 11:53:41 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:47.423 11:53:41 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:47.423 11:53:41 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:48.806 11:53:42 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:48.806 11:53:42 -- common/autotest_common.sh@1177 -- # local i=0 00:15:48.806 11:53:42 -- common/autotest_common.sh@1178 -- # local 
nvme_device_counter=1 nvme_devices=0 00:15:48.806 11:53:42 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:15:48.806 11:53:42 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:15:48.806 11:53:42 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:51.352 11:53:44 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:51.352 11:53:44 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:51.352 11:53:44 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:51.352 11:53:44 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:15:51.352 11:53:44 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:51.352 11:53:44 -- common/autotest_common.sh@1187 -- # return 0 00:15:51.352 11:53:44 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:51.352 11:53:44 -- nvmf/common.sh@510 -- # local dev _ 00:15:51.352 11:53:44 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:51.352 11:53:44 -- nvmf/common.sh@509 -- # nvme list 00:15:51.352 11:53:44 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:51.352 11:53:44 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:51.352 11:53:44 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:51.352 11:53:44 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:51.352 11:53:44 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:51.352 11:53:44 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:51.352 11:53:44 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:51.352 11:53:44 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:51.352 11:53:44 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:51.352 11:53:44 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:51.352 11:53:44 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:51.352 /dev/nvme0n1 ]] 00:15:51.352 11:53:44 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:51.352 11:53:44 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:51.352 11:53:44 -- nvmf/common.sh@510 -- # local dev _ 00:15:51.352 11:53:44 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:51.352 11:53:44 -- nvmf/common.sh@509 -- # nvme list 00:15:51.352 11:53:44 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:51.352 11:53:44 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:51.352 11:53:44 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:51.352 11:53:44 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:51.352 11:53:44 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:51.352 11:53:44 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:51.352 11:53:44 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:51.352 11:53:44 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:51.352 11:53:44 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:51.352 11:53:44 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:51.352 11:53:44 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:51.352 11:53:44 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:51.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.613 11:53:45 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:51.613 11:53:45 -- common/autotest_common.sh@1198 -- # local i=0 00:15:51.613 11:53:45 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:51.613 11:53:45 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:51.613 11:53:45 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:51.613 11:53:45 -- 
common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:51.613 11:53:45 -- common/autotest_common.sh@1210 -- # return 0 00:15:51.613 11:53:45 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:51.613 11:53:45 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:51.613 11:53:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:51.613 11:53:45 -- common/autotest_common.sh@10 -- # set +x 00:15:51.613 11:53:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:51.613 11:53:45 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:51.613 11:53:45 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:51.613 11:53:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:51.613 11:53:45 -- nvmf/common.sh@116 -- # sync 00:15:51.613 11:53:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:51.613 11:53:45 -- nvmf/common.sh@119 -- # set +e 00:15:51.613 11:53:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:51.613 11:53:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:51.613 rmmod nvme_tcp 00:15:51.613 rmmod nvme_fabrics 00:15:51.613 rmmod nvme_keyring 00:15:51.613 11:53:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:51.613 11:53:45 -- nvmf/common.sh@123 -- # set -e 00:15:51.613 11:53:45 -- nvmf/common.sh@124 -- # return 0 00:15:51.613 11:53:45 -- nvmf/common.sh@477 -- # '[' -n 1890969 ']' 00:15:51.613 11:53:45 -- nvmf/common.sh@478 -- # killprocess 1890969 00:15:51.613 11:53:45 -- common/autotest_common.sh@926 -- # '[' -z 1890969 ']' 00:15:51.613 11:53:45 -- common/autotest_common.sh@930 -- # kill -0 1890969 00:15:51.613 11:53:45 -- common/autotest_common.sh@931 -- # uname 00:15:51.613 11:53:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:51.613 11:53:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1890969 00:15:51.613 11:53:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:51.613 11:53:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:51.613 11:53:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1890969' 00:15:51.613 killing process with pid 1890969 00:15:51.613 11:53:45 -- common/autotest_common.sh@945 -- # kill 1890969 00:15:51.613 11:53:45 -- common/autotest_common.sh@950 -- # wait 1890969 00:15:51.874 11:53:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:51.874 11:53:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:51.874 11:53:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:51.874 11:53:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:51.874 11:53:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:51.874 11:53:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.874 11:53:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.874 11:53:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.788 11:53:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:53.788 00:15:53.788 real 0m14.932s 00:15:53.788 user 0m22.958s 00:15:53.788 sys 0m5.910s 00:15:53.788 11:53:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:53.788 11:53:47 -- common/autotest_common.sh@10 -- # set +x 00:15:53.788 ************************************ 00:15:53.788 END TEST nvmf_nvme_cli 00:15:53.788 ************************************ 00:15:53.788 11:53:47 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:15:53.788 11:53:47 -- nvmf/nvmf.sh@46 -- # run_test 
nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:53.788 11:53:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:53.788 11:53:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:53.788 11:53:47 -- common/autotest_common.sh@10 -- # set +x 00:15:54.049 ************************************ 00:15:54.049 START TEST nvmf_host_management 00:15:54.049 ************************************ 00:15:54.049 11:53:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:54.049 * Looking for test storage... 00:15:54.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:54.050 11:53:47 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:54.050 11:53:47 -- nvmf/common.sh@7 -- # uname -s 00:15:54.050 11:53:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.050 11:53:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.050 11:53:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.050 11:53:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.050 11:53:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.050 11:53:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.050 11:53:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.050 11:53:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.050 11:53:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.050 11:53:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.050 11:53:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:54.050 11:53:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:54.050 11:53:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.050 11:53:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.050 11:53:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:54.050 11:53:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:54.050 11:53:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.050 11:53:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.050 11:53:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.050 11:53:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.050 11:53:47 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.050 11:53:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.050 11:53:47 -- paths/export.sh@5 -- # export PATH 00:15:54.050 11:53:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.050 11:53:47 -- nvmf/common.sh@46 -- # : 0 00:15:54.050 11:53:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:54.050 11:53:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:54.050 11:53:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:54.050 11:53:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.050 11:53:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.050 11:53:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:54.050 11:53:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:54.050 11:53:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:54.050 11:53:47 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:54.050 11:53:47 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:54.050 11:53:47 -- target/host_management.sh@104 -- # nvmftestinit 00:15:54.050 11:53:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:54.050 11:53:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:54.050 11:53:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:54.050 11:53:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:54.050 11:53:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:54.050 11:53:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.050 11:53:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:54.050 11:53:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.050 11:53:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:54.050 11:53:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:54.050 11:53:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:54.050 11:53:47 -- common/autotest_common.sh@10 -- # set +x 00:16:02.198 11:53:54 -- nvmf/common.sh@288 -- # local 
intel=0x8086 mellanox=0x15b3 pci 00:16:02.198 11:53:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:02.198 11:53:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:02.198 11:53:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:02.198 11:53:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:02.198 11:53:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:02.198 11:53:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:02.198 11:53:54 -- nvmf/common.sh@294 -- # net_devs=() 00:16:02.198 11:53:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:02.198 11:53:54 -- nvmf/common.sh@295 -- # e810=() 00:16:02.198 11:53:54 -- nvmf/common.sh@295 -- # local -ga e810 00:16:02.198 11:53:54 -- nvmf/common.sh@296 -- # x722=() 00:16:02.198 11:53:54 -- nvmf/common.sh@296 -- # local -ga x722 00:16:02.198 11:53:54 -- nvmf/common.sh@297 -- # mlx=() 00:16:02.198 11:53:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:02.198 11:53:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:02.198 11:53:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:02.198 11:53:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:02.198 11:53:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:02.198 11:53:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:02.198 11:53:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:02.198 11:53:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:02.198 11:53:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:02.198 11:53:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:02.198 11:53:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:02.198 11:53:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:02.198 11:53:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:02.198 11:53:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:02.198 11:53:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:02.198 11:53:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:02.198 11:53:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:02.198 11:53:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:02.198 11:53:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:02.198 11:53:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:02.198 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:02.198 11:53:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:02.198 11:53:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:02.198 11:53:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:02.198 11:53:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:02.198 11:53:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:02.198 11:53:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:02.198 11:53:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:02.198 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:02.198 11:53:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:02.198 11:53:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:02.198 11:53:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:02.198 11:53:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:02.198 11:53:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:02.198 11:53:54 -- 
nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:02.198 11:53:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:02.198 11:53:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:02.198 11:53:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:02.198 11:53:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:02.198 11:53:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:02.198 11:53:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:02.198 11:53:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:02.198 Found net devices under 0000:31:00.0: cvl_0_0 00:16:02.198 11:53:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:02.198 11:53:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:02.198 11:53:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:02.198 11:53:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:02.198 11:53:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:02.198 11:53:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:02.198 Found net devices under 0000:31:00.1: cvl_0_1 00:16:02.198 11:53:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:02.198 11:53:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:02.198 11:53:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:02.198 11:53:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:02.198 11:53:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:02.198 11:53:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:02.198 11:53:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:02.198 11:53:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:02.198 11:53:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:02.198 11:53:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:02.198 11:53:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:02.198 11:53:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:02.198 11:53:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:02.198 11:53:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:02.198 11:53:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:02.198 11:53:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:02.198 11:53:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:02.198 11:53:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:02.198 11:53:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:02.198 11:53:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:02.198 11:53:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:02.198 11:53:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:02.198 11:53:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:02.198 11:53:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:02.198 11:53:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:02.198 11:53:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:02.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:02.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:16:02.198 00:16:02.198 --- 10.0.0.2 ping statistics --- 00:16:02.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.198 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:16:02.198 11:53:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:02.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:02.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:16:02.198 00:16:02.198 --- 10.0.0.1 ping statistics --- 00:16:02.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.198 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:16:02.198 11:53:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:02.198 11:53:55 -- nvmf/common.sh@410 -- # return 0 00:16:02.198 11:53:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:02.198 11:53:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:02.198 11:53:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:02.198 11:53:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:02.198 11:53:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:02.198 11:53:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:02.198 11:53:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:02.198 11:53:55 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:16:02.198 11:53:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:02.198 11:53:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:02.198 11:53:55 -- common/autotest_common.sh@10 -- # set +x 00:16:02.198 ************************************ 00:16:02.198 START TEST nvmf_host_management 00:16:02.198 ************************************ 00:16:02.198 11:53:55 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:16:02.198 11:53:55 -- target/host_management.sh@69 -- # starttarget 00:16:02.198 11:53:55 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:02.198 11:53:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:02.198 11:53:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:02.198 11:53:55 -- common/autotest_common.sh@10 -- # set +x 00:16:02.198 11:53:55 -- nvmf/common.sh@469 -- # nvmfpid=1896228 00:16:02.198 11:53:55 -- nvmf/common.sh@470 -- # waitforlisten 1896228 00:16:02.198 11:53:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:02.198 11:53:55 -- common/autotest_common.sh@819 -- # '[' -z 1896228 ']' 00:16:02.198 11:53:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.198 11:53:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:02.198 11:53:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.198 11:53:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:02.199 11:53:55 -- common/autotest_common.sh@10 -- # set +x 00:16:02.199 [2024-06-10 11:53:55.111821] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:16:02.199 [2024-06-10 11:53:55.111883] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.199 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.199 [2024-06-10 11:53:55.199940] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:02.199 [2024-06-10 11:53:55.293080] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:02.199 [2024-06-10 11:53:55.293235] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:02.199 [2024-06-10 11:53:55.293260] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:02.199 [2024-06-10 11:53:55.293275] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:02.199 [2024-06-10 11:53:55.293406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:02.199 [2024-06-10 11:53:55.293583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:02.199 [2024-06-10 11:53:55.293749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:02.199 [2024-06-10 11:53:55.293751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.199 11:53:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:02.199 11:53:55 -- common/autotest_common.sh@852 -- # return 0 00:16:02.199 11:53:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:02.199 11:53:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:02.199 11:53:55 -- common/autotest_common.sh@10 -- # set +x 00:16:02.199 11:53:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:02.199 11:53:55 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:02.199 11:53:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:02.199 11:53:55 -- common/autotest_common.sh@10 -- # set +x 00:16:02.199 [2024-06-10 11:53:55.930280] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:02.199 11:53:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:02.199 11:53:55 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:02.199 11:53:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:02.199 11:53:55 -- common/autotest_common.sh@10 -- # set +x 00:16:02.199 11:53:55 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:02.199 11:53:55 -- target/host_management.sh@23 -- # cat 00:16:02.199 11:53:55 -- target/host_management.sh@30 -- # rpc_cmd 00:16:02.199 11:53:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:02.199 11:53:55 -- common/autotest_common.sh@10 -- # set +x 00:16:02.460 Malloc0 00:16:02.460 [2024-06-10 11:53:55.989580] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:02.460 11:53:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:02.460 11:53:56 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:02.460 11:53:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:02.460 11:53:56 -- common/autotest_common.sh@10 -- # set +x 00:16:02.460 11:53:56 -- target/host_management.sh@73 -- # perfpid=1896492 00:16:02.460 11:53:56 -- target/host_management.sh@74 -- # 
waitforlisten 1896492 /var/tmp/bdevperf.sock 00:16:02.460 11:53:56 -- common/autotest_common.sh@819 -- # '[' -z 1896492 ']' 00:16:02.460 11:53:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:02.460 11:53:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:02.460 11:53:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:02.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:02.460 11:53:56 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:02.460 11:53:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:02.460 11:53:56 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:02.460 11:53:56 -- common/autotest_common.sh@10 -- # set +x 00:16:02.460 11:53:56 -- nvmf/common.sh@520 -- # config=() 00:16:02.460 11:53:56 -- nvmf/common.sh@520 -- # local subsystem config 00:16:02.460 11:53:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:02.460 11:53:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:02.460 { 00:16:02.460 "params": { 00:16:02.460 "name": "Nvme$subsystem", 00:16:02.460 "trtype": "$TEST_TRANSPORT", 00:16:02.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:02.460 "adrfam": "ipv4", 00:16:02.460 "trsvcid": "$NVMF_PORT", 00:16:02.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:02.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:02.460 "hdgst": ${hdgst:-false}, 00:16:02.460 "ddgst": ${ddgst:-false} 00:16:02.460 }, 00:16:02.460 "method": "bdev_nvme_attach_controller" 00:16:02.460 } 00:16:02.460 EOF 00:16:02.460 )") 00:16:02.460 11:53:56 -- nvmf/common.sh@542 -- # cat 00:16:02.460 11:53:56 -- nvmf/common.sh@544 -- # jq . 00:16:02.460 11:53:56 -- nvmf/common.sh@545 -- # IFS=, 00:16:02.460 11:53:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:02.460 "params": { 00:16:02.460 "name": "Nvme0", 00:16:02.460 "trtype": "tcp", 00:16:02.460 "traddr": "10.0.0.2", 00:16:02.460 "adrfam": "ipv4", 00:16:02.460 "trsvcid": "4420", 00:16:02.460 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:02.460 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:02.460 "hdgst": false, 00:16:02.460 "ddgst": false 00:16:02.460 }, 00:16:02.460 "method": "bdev_nvme_attach_controller" 00:16:02.460 }' 00:16:02.460 [2024-06-10 11:53:56.093528] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:02.460 [2024-06-10 11:53:56.093597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1896492 ] 00:16:02.460 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.460 [2024-06-10 11:53:56.153996] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.460 [2024-06-10 11:53:56.216436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.721 Running I/O for 10 seconds... 
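(Aside, for anyone replaying this step outside the harness: the 10.0.0.1/10.0.0.2 path that the bdevperf job above runs over was built earlier in this trace by nvmf_tcp_init, which moves one CVL port into a dedicated namespace for the target and leaves the other in the default namespace for the initiator. A rough standalone equivalent, using the same interface names and addresses seen in this run; requires root, and the cvl_* names are specific to this machine's E810 ports. The trace also flushes both interfaces' IPv4 addresses first.

ip netns add cvl_0_0_ns_spdk                                        # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator-side address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target-side address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # same firewall rule as in the trace
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
)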
00:16:03.295 11:53:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:03.295 11:53:56 -- common/autotest_common.sh@852 -- # return 0 00:16:03.295 11:53:56 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:03.295 11:53:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:03.295 11:53:56 -- common/autotest_common.sh@10 -- # set +x 00:16:03.295 11:53:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:03.295 11:53:56 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:03.295 11:53:56 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:03.295 11:53:56 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:03.295 11:53:56 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:03.295 11:53:56 -- target/host_management.sh@52 -- # local ret=1 00:16:03.295 11:53:56 -- target/host_management.sh@53 -- # local i 00:16:03.295 11:53:56 -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:03.295 11:53:56 -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:03.295 11:53:56 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:03.295 11:53:56 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:03.295 11:53:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:03.295 11:53:56 -- common/autotest_common.sh@10 -- # set +x 00:16:03.295 11:53:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:03.295 11:53:56 -- target/host_management.sh@55 -- # read_io_count=1223 00:16:03.295 11:53:56 -- target/host_management.sh@58 -- # '[' 1223 -ge 100 ']' 00:16:03.295 11:53:56 -- target/host_management.sh@59 -- # ret=0 00:16:03.295 11:53:56 -- target/host_management.sh@60 -- # break 00:16:03.295 11:53:56 -- target/host_management.sh@64 -- # return 0 00:16:03.295 11:53:56 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:03.295 11:53:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:03.295 11:53:56 -- common/autotest_common.sh@10 -- # set +x 00:16:03.295 [2024-06-10 11:53:56.920661] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920707] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920716] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920723] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920731] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920741] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920748] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920755] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the 
state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920762] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920778] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920788] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920798] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920806] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920814] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920822] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920830] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920837] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920843] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920850] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920856] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920863] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920870] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920877] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920883] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920890] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920896] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920903] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920909] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920915] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920922] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920929] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920935] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920941] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920948] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920954] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920960] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920968] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920975] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920982] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.295 [2024-06-10 11:53:56.920988] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.296 [2024-06-10 11:53:56.920995] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.296 [2024-06-10 11:53:56.921001] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.296 [2024-06-10 11:53:56.921008] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.296 [2024-06-10 11:53:56.921014] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.296 [2024-06-10 11:53:56.921020] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.296 [2024-06-10 11:53:56.921026] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.296 [2024-06-10 11:53:56.921032] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.296 [2024-06-10 11:53:56.921039] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.296 [2024-06-10 11:53:56.921046] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.296 [2024-06-10 11:53:56.921053] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.296 [2024-06-10 11:53:56.921059] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.296 [2024-06-10 
11:53:56.921065] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.296 [2024-06-10 11:53:56.921072] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.296 [2024-06-10 11:53:56.921078] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.296 [2024-06-10 11:53:56.921084] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363cb0 is same with the state(5) to be set 00:16:03.296 [2024-06-10 11:53:56.921397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.296 [2024-06-10 11:53:56.921435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.296 [2024-06-10 11:53:56.921453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.296 [2024-06-10 11:53:56.921462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.296 [2024-06-10 11:53:56.921472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.296 [2024-06-10 11:53:56.921480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.296 [2024-06-10 11:53:56.921491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.296 [2024-06-10 11:53:56.921505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.296 [2024-06-10 11:53:56.921515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.296 [2024-06-10 11:53:56.921523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.296 [2024-06-10 11:53:56.921532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.296 [2024-06-10 11:53:56.921540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.296 [2024-06-10 11:53:56.921550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.296 [2024-06-10 11:53:56.921557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.296 [2024-06-10 11:53:56.921566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.296 [2024-06-10 11:53:56.921575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.296 [2024-06-10 11:53:56.921585] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.296 [2024-06-10 11:53:56.921592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.296 [2024-06-10 11:53:56.921601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.296 [2024-06-10 11:53:56.921610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.296 [2024-06-10 11:53:56.921620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.296 [2024-06-10 11:53:56.921627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.296 [2024-06-10 11:53:56.921636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.296 [2024-06-10 11:53:56.921644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.296 [2024-06-10 11:53:56.921655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.296 [2024-06-10 11:53:56.921663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.296 [2024-06-10 11:53:56.921672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.296 [2024-06-10 11:53:56.921680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.296 [2024-06-10 11:53:56.921690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.296 [2024-06-10 11:53:56.921697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.296 [2024-06-10 11:53:56.921706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.296 [2024-06-10 11:53:56.921715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.296 [2024-06-10 11:53:56.921726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.296 [2024-06-10 11:53:56.921733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.296 [2024-06-10 11:53:56.921742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.296 [2024-06-10 11:53:56.921750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.296 [2024-06-10 11:53:56.921759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.296 [2024-06-10 11:53:56.921767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.296 [2024-06-10 11:53:56.921776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.296 [2024-06-10 11:53:56.921784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.296 [2024-06-10 11:53:56.921794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.296 [2024-06-10 11:53:56.921801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.296 [2024-06-10 11:53:56.921812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.296 [2024-06-10 11:53:56.921819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.296 [2024-06-10 11:53:56.921830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.296 [2024-06-10 11:53:56.921838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.296 [2024-06-10 11:53:56.921847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.296 [2024-06-10 11:53:56.921854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.296 [2024-06-10 11:53:56.921864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.296 [2024-06-10 11:53:56.921871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.296 [2024-06-10 11:53:56.921880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.296 [2024-06-10 11:53:56.921887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.296 [2024-06-10 11:53:56.921897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.921905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.921916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.921925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.921934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 
lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.921942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.921953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.921960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.921970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.921977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.921987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.921993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:03.297 [2024-06-10 11:53:56.922453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.297 [2024-06-10 11:53:56.922511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.297 [2024-06-10 11:53:56.922519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.298 [2024-06-10 11:53:56.922527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:03.298 [2024-06-10 11:53:56.922535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.298 [2024-06-10 11:53:56.922543] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ed110 is same with the state(5) to be set 00:16:03.298 [2024-06-10 11:53:56.922585] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21ed110 was disconnected and freed. reset controller. 
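The long stretch of qpair messages above is the expected fallout of the access change this test makes: once the bdev_get_iostat / num_read_ops check near the top of this block has confirmed that bdevperf is actually moving data (1223 read ops against a threshold of 100), the script revokes the host's authorization, and the target aborts every outstanding command with SQ DELETION and frees the queue pair. The polling half of that, done by the script's waitforio helper, can be approximated with scripts/rpc.py and jq; the 10 iterations match the loop counter visible in the trace, while the sleep interval here is illustrative:

SOCK=/var/tmp/bdevperf.sock
BDEV=Nvme0n1
for i in {1..10}; do
    reads=$(./scripts/rpc.py -s "$SOCK" bdev_get_iostat -b "$BDEV" | jq -r '.bdevs[0].num_read_ops')
    if [ "$reads" -ge 100 ]; then          # enough traffic observed, proceed
        echo "observed $reads read ops on $BDEV"
        break
    fi
    sleep 0.5                              # interval not taken from the script
done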
00:16:03.298 [2024-06-10 11:53:56.923774] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:03.298 task offset: 39168 on job bdev=Nvme0n1 fails 00:16:03.298 00:16:03.298 Latency(us) 00:16:03.298 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.298 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:03.298 Job: Nvme0n1 ended in about 0.53 seconds with error 00:16:03.298 Verification LBA range: start 0x0 length 0x400 00:16:03.298 Nvme0n1 : 0.53 2482.44 155.15 121.28 0.00 24221.56 6662.83 27415.89 00:16:03.298 =================================================================================================================== 00:16:03.298 Total : 2482.44 155.15 121.28 0.00 24221.56 6662.83 27415.89 00:16:03.298 11:53:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:03.298 [2024-06-10 11:53:56.925766] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:03.298 [2024-06-10 11:53:56.925790] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ef450 (9): Bad file descriptor 00:16:03.298 11:53:56 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:03.298 11:53:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:03.298 11:53:56 -- common/autotest_common.sh@10 -- # set +x 00:16:03.298 [2024-06-10 11:53:56.932305] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:16:03.298 [2024-06-10 11:53:56.932402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:03.298 [2024-06-10 11:53:56.932431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.298 [2024-06-10 11:53:56.932447] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:16:03.298 [2024-06-10 11:53:56.932456] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:16:03.298 [2024-06-10 11:53:56.932463] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:16:03.298 [2024-06-10 11:53:56.932470] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21ef450 00:16:03.298 [2024-06-10 11:53:56.932490] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ef450 (9): Bad file descriptor 00:16:03.298 [2024-06-10 11:53:56.932502] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:03.298 [2024-06-10 11:53:56.932509] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:03.298 [2024-06-10 11:53:56.932517] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:03.298 [2024-06-10 11:53:56.932530] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
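The pair of RPCs driving this scenario is small. A hand-run sketch, assuming the target's default RPC socket (/var/tmp/spdk.sock) and the NQNs used throughout this run:

SUBSYS=nqn.2016-06.io.spdk:cnode0
HOST=nqn.2016-06.io.spdk:host0
# Revoke access: the target tears down the host's queue pairs (the SQ DELETION
# aborts above) and rejects reconnects with "does not allow host".
./scripts/rpc.py nvmf_subsystem_remove_host "$SUBSYS" "$HOST"
sleep 1
# Re-grant access: later connects from this host NQN are accepted again. The trace
# shows one reconnect racing with this call and still being refused, which is why
# the script sleeps before moving on.
./scripts/rpc.py nvmf_subsystem_add_host "$SUBSYS" "$HOST"
sleep 1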
00:16:03.298 11:53:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:03.298 11:53:56 -- target/host_management.sh@87 -- # sleep 1 00:16:04.254 11:53:57 -- target/host_management.sh@91 -- # kill -9 1896492 00:16:04.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1896492) - No such process 00:16:04.254 11:53:57 -- target/host_management.sh@91 -- # true 00:16:04.254 11:53:57 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:04.254 11:53:57 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:04.254 11:53:57 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:04.254 11:53:57 -- nvmf/common.sh@520 -- # config=() 00:16:04.254 11:53:57 -- nvmf/common.sh@520 -- # local subsystem config 00:16:04.254 11:53:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:04.254 11:53:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:04.254 { 00:16:04.254 "params": { 00:16:04.254 "name": "Nvme$subsystem", 00:16:04.254 "trtype": "$TEST_TRANSPORT", 00:16:04.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:04.254 "adrfam": "ipv4", 00:16:04.254 "trsvcid": "$NVMF_PORT", 00:16:04.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:04.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:04.254 "hdgst": ${hdgst:-false}, 00:16:04.254 "ddgst": ${ddgst:-false} 00:16:04.254 }, 00:16:04.254 "method": "bdev_nvme_attach_controller" 00:16:04.254 } 00:16:04.254 EOF 00:16:04.254 )") 00:16:04.254 11:53:57 -- nvmf/common.sh@542 -- # cat 00:16:04.254 11:53:57 -- nvmf/common.sh@544 -- # jq . 00:16:04.254 11:53:57 -- nvmf/common.sh@545 -- # IFS=, 00:16:04.254 11:53:57 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:04.254 "params": { 00:16:04.254 "name": "Nvme0", 00:16:04.254 "trtype": "tcp", 00:16:04.254 "traddr": "10.0.0.2", 00:16:04.254 "adrfam": "ipv4", 00:16:04.254 "trsvcid": "4420", 00:16:04.254 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:04.254 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:04.254 "hdgst": false, 00:16:04.254 "ddgst": false 00:16:04.254 }, 00:16:04.254 "method": "bdev_nvme_attach_controller" 00:16:04.254 }' 00:16:04.254 [2024-06-10 11:53:57.990992] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:04.254 [2024-06-10 11:53:57.991047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1896851 ] 00:16:04.254 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.514 [2024-06-10 11:53:58.050452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.514 [2024-06-10 11:53:58.112661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.775 Running I/O for 1 seconds... 
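The JSON printed above is handed to bdevperf through process substitution (--json /dev/fd/62 here, /dev/fd/63 for the first run); a plain file works just as well. A self-contained sketch of this second, 1-second run follows. The attach-controller parameters are the ones printed in the trace; the surrounding "subsystems"/"bdev" wrapper is SPDK's generic JSON-config layout and is assumed rather than copied from gen_nvmf_target_json, which may emit additional entries.

cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1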
00:16:05.717 00:16:05.717 Latency(us) 00:16:05.717 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.717 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:05.717 Verification LBA range: start 0x0 length 0x400 00:16:05.717 Nvme0n1 : 1.01 4162.07 260.13 0.00 0.00 15125.03 1372.16 19333.12 00:16:05.717 =================================================================================================================== 00:16:05.717 Total : 4162.07 260.13 0.00 0.00 15125.03 1372.16 19333.12 00:16:05.717 11:53:59 -- target/host_management.sh@101 -- # stoptarget 00:16:05.717 11:53:59 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:05.717 11:53:59 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:05.717 11:53:59 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:05.717 11:53:59 -- target/host_management.sh@40 -- # nvmftestfini 00:16:05.717 11:53:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:05.717 11:53:59 -- nvmf/common.sh@116 -- # sync 00:16:05.717 11:53:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:05.717 11:53:59 -- nvmf/common.sh@119 -- # set +e 00:16:05.717 11:53:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:05.718 11:53:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:05.718 rmmod nvme_tcp 00:16:05.718 rmmod nvme_fabrics 00:16:05.718 rmmod nvme_keyring 00:16:05.977 11:53:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:05.977 11:53:59 -- nvmf/common.sh@123 -- # set -e 00:16:05.977 11:53:59 -- nvmf/common.sh@124 -- # return 0 00:16:05.977 11:53:59 -- nvmf/common.sh@477 -- # '[' -n 1896228 ']' 00:16:05.977 11:53:59 -- nvmf/common.sh@478 -- # killprocess 1896228 00:16:05.977 11:53:59 -- common/autotest_common.sh@926 -- # '[' -z 1896228 ']' 00:16:05.977 11:53:59 -- common/autotest_common.sh@930 -- # kill -0 1896228 00:16:05.977 11:53:59 -- common/autotest_common.sh@931 -- # uname 00:16:05.977 11:53:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:05.977 11:53:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1896228 00:16:05.977 11:53:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:05.977 11:53:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:05.977 11:53:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1896228' 00:16:05.977 killing process with pid 1896228 00:16:05.977 11:53:59 -- common/autotest_common.sh@945 -- # kill 1896228 00:16:05.977 11:53:59 -- common/autotest_common.sh@950 -- # wait 1896228 00:16:05.977 [2024-06-10 11:53:59.664151] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:05.977 11:53:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:05.977 11:53:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:05.977 11:53:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:05.977 11:53:59 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:05.978 11:53:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:05.978 11:53:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.978 11:53:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.978 11:53:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.546 11:54:01 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 
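For readers following the host_management flow, the second bdevperf pass above attaches to the target purely through a generated JSON config fed over a file descriptor rather than through RPC calls. A minimal stand-alone sketch of that config and invocation follows; the "subsystems" wrapper, the /tmp/nvme0.json path and the relative bdevperf path are assumptions added for illustration, while the attach parameters are copied from the JSON printed in the log above.

# Sketch only: stand-alone equivalent of the generated bdevperf config used above.
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same workload parameters as the passing run above: QD 64, 64 KiB verify I/O for 1 s.
./build/examples/bdevperf --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1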
00:16:08.546 00:16:08.546 real 0m6.699s 00:16:08.546 user 0m19.890s 00:16:08.546 sys 0m1.086s 00:16:08.546 11:54:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:08.546 11:54:01 -- common/autotest_common.sh@10 -- # set +x 00:16:08.546 ************************************ 00:16:08.546 END TEST nvmf_host_management 00:16:08.546 ************************************ 00:16:08.546 11:54:01 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:16:08.546 00:16:08.546 real 0m14.233s 00:16:08.546 user 0m21.905s 00:16:08.547 sys 0m6.535s 00:16:08.547 11:54:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:08.547 11:54:01 -- common/autotest_common.sh@10 -- # set +x 00:16:08.547 ************************************ 00:16:08.547 END TEST nvmf_host_management 00:16:08.547 ************************************ 00:16:08.547 11:54:01 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:08.547 11:54:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:08.547 11:54:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:08.547 11:54:01 -- common/autotest_common.sh@10 -- # set +x 00:16:08.547 ************************************ 00:16:08.547 START TEST nvmf_lvol 00:16:08.547 ************************************ 00:16:08.547 11:54:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:08.547 * Looking for test storage... 00:16:08.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:08.547 11:54:01 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:08.547 11:54:01 -- nvmf/common.sh@7 -- # uname -s 00:16:08.547 11:54:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.547 11:54:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.547 11:54:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.547 11:54:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.547 11:54:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.547 11:54:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.547 11:54:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.547 11:54:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.547 11:54:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.547 11:54:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.547 11:54:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:08.547 11:54:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:08.547 11:54:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.547 11:54:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.547 11:54:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:08.547 11:54:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:08.547 11:54:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.547 11:54:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.547 11:54:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.547 11:54:01 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.547 11:54:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.547 11:54:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.547 11:54:01 -- paths/export.sh@5 -- # export PATH 00:16:08.547 11:54:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.547 11:54:01 -- nvmf/common.sh@46 -- # : 0 00:16:08.547 11:54:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:08.547 11:54:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:08.547 11:54:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:08.547 11:54:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.547 11:54:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.547 11:54:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:08.547 11:54:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:08.547 11:54:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:08.547 11:54:01 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:08.547 11:54:01 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:08.547 11:54:01 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:08.547 11:54:01 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:08.547 11:54:01 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:08.547 11:54:01 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:08.547 11:54:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:08.547 11:54:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:16:08.547 11:54:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:08.547 11:54:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:08.547 11:54:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:08.548 11:54:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.548 11:54:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:08.548 11:54:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.548 11:54:01 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:08.548 11:54:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:08.548 11:54:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:08.548 11:54:01 -- common/autotest_common.sh@10 -- # set +x 00:16:15.229 11:54:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:15.229 11:54:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:15.229 11:54:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:15.229 11:54:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:15.229 11:54:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:15.229 11:54:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:15.229 11:54:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:15.229 11:54:08 -- nvmf/common.sh@294 -- # net_devs=() 00:16:15.229 11:54:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:15.229 11:54:08 -- nvmf/common.sh@295 -- # e810=() 00:16:15.229 11:54:08 -- nvmf/common.sh@295 -- # local -ga e810 00:16:15.229 11:54:08 -- nvmf/common.sh@296 -- # x722=() 00:16:15.229 11:54:08 -- nvmf/common.sh@296 -- # local -ga x722 00:16:15.229 11:54:08 -- nvmf/common.sh@297 -- # mlx=() 00:16:15.229 11:54:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:15.229 11:54:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:15.229 11:54:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:15.229 11:54:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:15.229 11:54:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:15.229 11:54:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:15.229 11:54:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:15.229 11:54:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:15.229 11:54:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:15.229 11:54:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:15.229 11:54:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:15.229 11:54:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:15.229 11:54:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:15.229 11:54:08 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:15.229 11:54:08 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:15.229 11:54:08 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:15.229 11:54:08 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:15.229 11:54:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:15.229 11:54:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:15.229 11:54:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:15.229 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:15.229 11:54:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:15.229 11:54:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:15.229 11:54:08 -- nvmf/common.sh@349 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:15.229 11:54:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:15.229 11:54:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:15.229 11:54:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:15.229 11:54:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:15.229 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:15.229 11:54:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:15.229 11:54:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:15.229 11:54:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:15.229 11:54:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:15.229 11:54:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:15.229 11:54:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:15.229 11:54:08 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:15.229 11:54:08 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:15.229 11:54:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:15.229 11:54:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.229 11:54:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:15.229 11:54:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.229 11:54:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:15.229 Found net devices under 0000:31:00.0: cvl_0_0 00:16:15.229 11:54:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.229 11:54:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:15.229 11:54:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.229 11:54:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:15.229 11:54:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.229 11:54:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:15.229 Found net devices under 0000:31:00.1: cvl_0_1 00:16:15.229 11:54:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.229 11:54:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:15.229 11:54:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:15.229 11:54:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:15.229 11:54:08 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:15.229 11:54:08 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:15.229 11:54:08 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:15.229 11:54:08 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:15.229 11:54:08 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:15.229 11:54:08 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:15.229 11:54:08 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:15.229 11:54:08 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:15.229 11:54:08 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:15.229 11:54:08 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:15.229 11:54:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:15.229 11:54:08 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:15.229 11:54:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:15.229 11:54:08 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:15.229 11:54:08 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:15.490 11:54:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
00:16:15.490 11:54:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:15.490 11:54:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:15.490 11:54:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:15.752 11:54:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:15.752 11:54:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:15.752 11:54:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:15.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:15.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:16:15.752 00:16:15.752 --- 10.0.0.2 ping statistics --- 00:16:15.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.752 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:16:15.752 11:54:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:15.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:15.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.359 ms 00:16:15.752 00:16:15.752 --- 10.0.0.1 ping statistics --- 00:16:15.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.752 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:16:15.752 11:54:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:15.752 11:54:09 -- nvmf/common.sh@410 -- # return 0 00:16:15.752 11:54:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:15.752 11:54:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:15.752 11:54:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:15.752 11:54:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:15.752 11:54:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:15.752 11:54:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:15.752 11:54:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:15.752 11:54:09 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:15.752 11:54:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:15.752 11:54:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:15.752 11:54:09 -- common/autotest_common.sh@10 -- # set +x 00:16:15.752 11:54:09 -- nvmf/common.sh@469 -- # nvmfpid=1901843 00:16:15.752 11:54:09 -- nvmf/common.sh@470 -- # waitforlisten 1901843 00:16:15.752 11:54:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:15.752 11:54:09 -- common/autotest_common.sh@819 -- # '[' -z 1901843 ']' 00:16:15.752 11:54:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.752 11:54:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:15.752 11:54:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.752 11:54:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:15.752 11:54:09 -- common/autotest_common.sh@10 -- # set +x 00:16:15.752 [2024-06-10 11:54:09.404998] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
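The nvmf_tcp_init sequence recorded just above splits the two CVL ports between the root namespace (initiator side, 10.0.0.1) and a dedicated target namespace (10.0.0.2). Condensed into a stand-alone sketch using the same interface and namespace names as the log, with only the ordering comments added here:

# Sketch of the namespace split performed by nvmf_tcp_init above.
ip netns add cvl_0_0_ns_spdk                          # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns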
00:16:15.752 [2024-06-10 11:54:09.405070] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.752 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.752 [2024-06-10 11:54:09.476304] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:16.014 [2024-06-10 11:54:09.549211] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:16.014 [2024-06-10 11:54:09.549342] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:16.014 [2024-06-10 11:54:09.549350] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:16.014 [2024-06-10 11:54:09.549358] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:16.014 [2024-06-10 11:54:09.549505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.014 [2024-06-10 11:54:09.549700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:16.014 [2024-06-10 11:54:09.549704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.586 11:54:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:16.586 11:54:10 -- common/autotest_common.sh@852 -- # return 0 00:16:16.586 11:54:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:16.586 11:54:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:16.586 11:54:10 -- common/autotest_common.sh@10 -- # set +x 00:16:16.586 11:54:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:16.586 11:54:10 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:16.586 [2024-06-10 11:54:10.338199] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:16.846 11:54:10 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:16.846 11:54:10 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:16.846 11:54:10 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:17.107 11:54:10 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:17.107 11:54:10 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:17.107 11:54:10 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:17.368 11:54:11 -- target/nvmf_lvol.sh@29 -- # lvs=8fa4cdb1-a948-4efb-b765-992cd8540d94 00:16:17.368 11:54:11 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8fa4cdb1-a948-4efb-b765-992cd8540d94 lvol 20 00:16:17.718 11:54:11 -- target/nvmf_lvol.sh@32 -- # lvol=08a5854b-c2d2-4b2b-880d-80e29f841a68 00:16:17.718 11:54:11 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:17.718 11:54:11 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
08a5854b-c2d2-4b2b-880d-80e29f841a68 00:16:17.979 11:54:11 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:17.979 [2024-06-10 11:54:11.641862] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.979 11:54:11 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:18.240 11:54:11 -- target/nvmf_lvol.sh@42 -- # perf_pid=1902537 00:16:18.240 11:54:11 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:18.240 11:54:11 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:18.240 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.182 11:54:12 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 08a5854b-c2d2-4b2b-880d-80e29f841a68 MY_SNAPSHOT 00:16:19.443 11:54:13 -- target/nvmf_lvol.sh@47 -- # snapshot=fe8d78b3-ff0e-42bb-9487-683b170a4f3e 00:16:19.443 11:54:13 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 08a5854b-c2d2-4b2b-880d-80e29f841a68 30 00:16:19.704 11:54:13 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone fe8d78b3-ff0e-42bb-9487-683b170a4f3e MY_CLONE 00:16:19.704 11:54:13 -- target/nvmf_lvol.sh@49 -- # clone=94697ff9-ef64-48c2-9629-7a93a8da6c1f 00:16:19.704 11:54:13 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 94697ff9-ef64-48c2-9629-7a93a8da6c1f 00:16:20.274 11:54:13 -- target/nvmf_lvol.sh@53 -- # wait 1902537 00:16:30.274 Initializing NVMe Controllers 00:16:30.274 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:30.274 Controller IO queue size 128, less than required. 00:16:30.274 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:30.274 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:30.274 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:30.274 Initialization complete. Launching workers. 
00:16:30.274 ======================================================== 00:16:30.274 Latency(us) 00:16:30.274 Device Information : IOPS MiB/s Average min max 00:16:30.274 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 18026.60 70.42 7102.12 531.66 48157.83 00:16:30.274 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12387.90 48.39 10334.09 3768.54 49063.13 00:16:30.274 ======================================================== 00:16:30.274 Total : 30414.50 118.81 8418.51 531.66 49063.13 00:16:30.274 00:16:30.274 11:54:22 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:30.274 11:54:22 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 08a5854b-c2d2-4b2b-880d-80e29f841a68 00:16:30.274 11:54:22 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8fa4cdb1-a948-4efb-b765-992cd8540d94 00:16:30.274 11:54:22 -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:30.274 11:54:22 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:30.274 11:54:22 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:30.274 11:54:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:30.274 11:54:22 -- nvmf/common.sh@116 -- # sync 00:16:30.274 11:54:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:30.274 11:54:22 -- nvmf/common.sh@119 -- # set +e 00:16:30.274 11:54:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:30.274 11:54:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:30.274 rmmod nvme_tcp 00:16:30.274 rmmod nvme_fabrics 00:16:30.274 rmmod nvme_keyring 00:16:30.274 11:54:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:30.274 11:54:22 -- nvmf/common.sh@123 -- # set -e 00:16:30.274 11:54:22 -- nvmf/common.sh@124 -- # return 0 00:16:30.274 11:54:22 -- nvmf/common.sh@477 -- # '[' -n 1901843 ']' 00:16:30.274 11:54:22 -- nvmf/common.sh@478 -- # killprocess 1901843 00:16:30.274 11:54:22 -- common/autotest_common.sh@926 -- # '[' -z 1901843 ']' 00:16:30.274 11:54:22 -- common/autotest_common.sh@930 -- # kill -0 1901843 00:16:30.274 11:54:22 -- common/autotest_common.sh@931 -- # uname 00:16:30.274 11:54:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:30.274 11:54:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1901843 00:16:30.274 11:54:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:30.274 11:54:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:30.274 11:54:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1901843' 00:16:30.274 killing process with pid 1901843 00:16:30.274 11:54:22 -- common/autotest_common.sh@945 -- # kill 1901843 00:16:30.274 11:54:22 -- common/autotest_common.sh@950 -- # wait 1901843 00:16:30.274 11:54:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:30.274 11:54:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:30.274 11:54:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:30.274 11:54:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:30.274 11:54:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:30.274 11:54:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.274 11:54:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:30.274 11:54:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
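Stripped of the xtrace noise, the nvmf_lvol run that just completed boils down to the RPC sequence below; "rpc.py" stands for scripts/rpc.py talking to the nvmf_tgt started earlier, and the UUIDs are the ones reported in the log above.

# Sketch of the nvmf_lvol provisioning, exercise and teardown recorded above.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                      # -> Malloc0
rpc.py bdev_malloc_create 64 512                      # -> Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
rpc.py bdev_lvol_create_lvstore raid0 lvs             # -> 8fa4cdb1-...
rpc.py bdev_lvol_create -u 8fa4cdb1-a948-4efb-b765-992cd8540d94 lvol 20
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 08a5854b-c2d2-4b2b-880d-80e29f841a68
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# snapshot/resize/clone/inflate exercised while spdk_nvme_perf runs against the namespace:
rpc.py bdev_lvol_snapshot 08a5854b-c2d2-4b2b-880d-80e29f841a68 MY_SNAPSHOT
rpc.py bdev_lvol_resize 08a5854b-c2d2-4b2b-880d-80e29f841a68 30
rpc.py bdev_lvol_clone fe8d78b3-ff0e-42bb-9487-683b170a4f3e MY_CLONE
rpc.py bdev_lvol_inflate 94697ff9-ef64-48c2-9629-7a93a8da6c1f
# teardown
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
rpc.py bdev_lvol_delete 08a5854b-c2d2-4b2b-880d-80e29f841a68
rpc.py bdev_lvol_delete_lvstore -u 8fa4cdb1-a948-4efb-b765-992cd8540d94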
00:16:31.659 11:54:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:31.659 00:16:31.659 real 0m23.235s 00:16:31.659 user 1m2.847s 00:16:31.659 sys 0m8.032s 00:16:31.659 11:54:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:31.659 11:54:25 -- common/autotest_common.sh@10 -- # set +x 00:16:31.659 ************************************ 00:16:31.659 END TEST nvmf_lvol 00:16:31.659 ************************************ 00:16:31.659 11:54:25 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:31.659 11:54:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:31.659 11:54:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:31.659 11:54:25 -- common/autotest_common.sh@10 -- # set +x 00:16:31.659 ************************************ 00:16:31.659 START TEST nvmf_lvs_grow 00:16:31.659 ************************************ 00:16:31.659 11:54:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:31.659 * Looking for test storage... 00:16:31.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:31.659 11:54:25 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:31.659 11:54:25 -- nvmf/common.sh@7 -- # uname -s 00:16:31.659 11:54:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:31.659 11:54:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.659 11:54:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.659 11:54:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.659 11:54:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.659 11:54:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.660 11:54:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.660 11:54:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.660 11:54:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.660 11:54:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.660 11:54:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:31.660 11:54:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:31.660 11:54:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.660 11:54:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.660 11:54:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:31.660 11:54:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:31.660 11:54:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.660 11:54:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.660 11:54:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.660 11:54:25 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.660 11:54:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.660 11:54:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.660 11:54:25 -- paths/export.sh@5 -- # export PATH 00:16:31.660 11:54:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.660 11:54:25 -- nvmf/common.sh@46 -- # : 0 00:16:31.660 11:54:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:31.660 11:54:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:31.660 11:54:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:31.660 11:54:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.660 11:54:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.660 11:54:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:31.660 11:54:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:31.660 11:54:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:31.660 11:54:25 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:31.660 11:54:25 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:31.660 11:54:25 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:16:31.660 11:54:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:31.660 11:54:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:31.660 11:54:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:31.660 11:54:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:31.660 11:54:25 -- nvmf/common.sh@400 -- # 
remove_spdk_ns 00:16:31.660 11:54:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.660 11:54:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:31.660 11:54:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.660 11:54:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:31.660 11:54:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:31.660 11:54:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:31.660 11:54:25 -- common/autotest_common.sh@10 -- # set +x 00:16:39.807 11:54:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:39.807 11:54:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:39.807 11:54:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:39.807 11:54:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:39.807 11:54:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:39.807 11:54:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:39.807 11:54:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:39.807 11:54:32 -- nvmf/common.sh@294 -- # net_devs=() 00:16:39.807 11:54:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:39.807 11:54:32 -- nvmf/common.sh@295 -- # e810=() 00:16:39.807 11:54:32 -- nvmf/common.sh@295 -- # local -ga e810 00:16:39.807 11:54:32 -- nvmf/common.sh@296 -- # x722=() 00:16:39.807 11:54:32 -- nvmf/common.sh@296 -- # local -ga x722 00:16:39.807 11:54:32 -- nvmf/common.sh@297 -- # mlx=() 00:16:39.807 11:54:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:39.807 11:54:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:39.807 11:54:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:39.807 11:54:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:39.807 11:54:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:39.807 11:54:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:39.807 11:54:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:39.807 11:54:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:39.807 11:54:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:39.807 11:54:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:39.807 11:54:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:39.807 11:54:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:39.807 11:54:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:39.807 11:54:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:39.807 11:54:32 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:39.807 11:54:32 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:39.807 11:54:32 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:39.807 11:54:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:39.807 11:54:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:39.807 11:54:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:39.807 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:39.807 11:54:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:39.808 11:54:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:39.808 11:54:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.808 11:54:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.808 11:54:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:39.808 
11:54:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:39.808 11:54:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:39.808 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:39.808 11:54:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:39.808 11:54:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:39.808 11:54:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.808 11:54:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.808 11:54:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:39.808 11:54:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:39.808 11:54:32 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:39.808 11:54:32 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:39.808 11:54:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:39.808 11:54:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.808 11:54:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:39.808 11:54:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.808 11:54:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:39.808 Found net devices under 0000:31:00.0: cvl_0_0 00:16:39.808 11:54:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.808 11:54:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:39.808 11:54:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.808 11:54:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:39.808 11:54:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.808 11:54:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:39.808 Found net devices under 0000:31:00.1: cvl_0_1 00:16:39.808 11:54:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.808 11:54:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:39.808 11:54:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:39.808 11:54:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:39.808 11:54:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:39.808 11:54:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:39.808 11:54:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:39.808 11:54:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:39.808 11:54:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:39.808 11:54:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:39.808 11:54:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:39.808 11:54:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:39.808 11:54:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:39.808 11:54:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:39.808 11:54:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:39.808 11:54:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:39.808 11:54:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:39.808 11:54:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:39.808 11:54:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:39.808 11:54:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:39.808 11:54:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:39.808 11:54:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:39.808 
11:54:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:39.808 11:54:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:39.808 11:54:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:39.808 11:54:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:39.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:39.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:16:39.808 00:16:39.808 --- 10.0.0.2 ping statistics --- 00:16:39.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.808 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:16:39.808 11:54:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:39.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:39.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:16:39.808 00:16:39.808 --- 10.0.0.1 ping statistics --- 00:16:39.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.808 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:16:39.808 11:54:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:39.808 11:54:32 -- nvmf/common.sh@410 -- # return 0 00:16:39.808 11:54:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:39.808 11:54:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:39.808 11:54:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:39.808 11:54:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:39.808 11:54:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:39.808 11:54:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:39.808 11:54:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:39.808 11:54:32 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:16:39.808 11:54:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:39.808 11:54:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:39.808 11:54:32 -- common/autotest_common.sh@10 -- # set +x 00:16:39.808 11:54:32 -- nvmf/common.sh@469 -- # nvmfpid=1908996 00:16:39.808 11:54:32 -- nvmf/common.sh@470 -- # waitforlisten 1908996 00:16:39.808 11:54:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:39.808 11:54:32 -- common/autotest_common.sh@819 -- # '[' -z 1908996 ']' 00:16:39.808 11:54:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.808 11:54:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:39.808 11:54:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.808 11:54:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:39.808 11:54:32 -- common/autotest_common.sh@10 -- # set +x 00:16:39.808 [2024-06-10 11:54:32.536709] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:16:39.808 [2024-06-10 11:54:32.536769] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.808 EAL: No free 2048 kB hugepages reported on node 1 00:16:39.808 [2024-06-10 11:54:32.606933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.808 [2024-06-10 11:54:32.679303] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:39.808 [2024-06-10 11:54:32.679425] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.808 [2024-06-10 11:54:32.679433] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:39.808 [2024-06-10 11:54:32.679441] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:39.808 [2024-06-10 11:54:32.679459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.808 11:54:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:39.808 11:54:33 -- common/autotest_common.sh@852 -- # return 0 00:16:39.808 11:54:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:39.808 11:54:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:39.808 11:54:33 -- common/autotest_common.sh@10 -- # set +x 00:16:39.808 11:54:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:39.808 11:54:33 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:39.808 [2024-06-10 11:54:33.466616] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:39.808 11:54:33 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:16:39.808 11:54:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:39.808 11:54:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:39.808 11:54:33 -- common/autotest_common.sh@10 -- # set +x 00:16:39.808 ************************************ 00:16:39.808 START TEST lvs_grow_clean 00:16:39.808 ************************************ 00:16:39.808 11:54:33 -- common/autotest_common.sh@1104 -- # lvs_grow 00:16:39.808 11:54:33 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:39.808 11:54:33 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:39.808 11:54:33 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:39.808 11:54:33 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:39.808 11:54:33 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:39.808 11:54:33 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:39.808 11:54:33 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:39.808 11:54:33 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:39.808 11:54:33 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:40.069 11:54:33 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:40.069 11:54:33 -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:40.069 11:54:33 -- target/nvmf_lvs_grow.sh@28 -- # lvs=9856aed6-c7df-4715-88d7-cf4bd8584741 00:16:40.069 11:54:33 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9856aed6-c7df-4715-88d7-cf4bd8584741 00:16:40.069 11:54:33 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:40.329 11:54:33 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:40.329 11:54:33 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:40.329 11:54:33 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9856aed6-c7df-4715-88d7-cf4bd8584741 lvol 150 00:16:40.590 11:54:34 -- target/nvmf_lvs_grow.sh@33 -- # lvol=2c208624-e819-436a-93bd-b1591d0f8315 00:16:40.590 11:54:34 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:40.590 11:54:34 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:40.590 [2024-06-10 11:54:34.256231] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:40.590 [2024-06-10 11:54:34.256283] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:40.590 true 00:16:40.590 11:54:34 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9856aed6-c7df-4715-88d7-cf4bd8584741 00:16:40.590 11:54:34 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:40.850 11:54:34 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:40.850 11:54:34 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:40.850 11:54:34 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2c208624-e819-436a-93bd-b1591d0f8315 00:16:41.110 11:54:34 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:41.110 [2024-06-10 11:54:34.842054] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.110 11:54:34 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:41.370 11:54:35 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1909464 00:16:41.370 11:54:35 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:41.370 11:54:35 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:41.370 11:54:35 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1909464 /var/tmp/bdevperf.sock 00:16:41.370 11:54:35 -- common/autotest_common.sh@819 -- # '[' -z 1909464 ']' 00:16:41.370 
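The lvs_grow_clean setup above builds its logical volume store on an AIO bdev backed by a plain 200 MiB file. The equivalent RPC sequence, with the lvstore UUID reported in the log, is sketched here; the backing-file variable is introduced only to keep the lines readable.

# Sketch of the aio-backed lvstore the lvs_grow_clean test created above.
# rpc.py = scripts/rpc.py against the nvmf_tgt started for this test.
AIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
truncate -s 200M "$AIO"                               # initial backing file
rpc.py bdev_aio_create "$AIO" aio_bdev 4096
rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
       --md-pages-per-cluster-ratio 300 aio_bdev lvs  # -> 9856aed6-...
rpc.py bdev_lvol_get_lvstores -u 9856aed6-c7df-4715-88d7-cf4bd8584741 \
       | jq -r '.[0].total_data_clusters'             # expect 49 clusters at 200 MiB
rpc.py bdev_lvol_create -u 9856aed6-c7df-4715-88d7-cf4bd8584741 lvol 150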
11:54:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:41.370 11:54:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:41.370 11:54:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:41.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:41.370 11:54:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:41.370 11:54:35 -- common/autotest_common.sh@10 -- # set +x 00:16:41.370 [2024-06-10 11:54:35.054862] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:41.370 [2024-06-10 11:54:35.054945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1909464 ] 00:16:41.370 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.370 [2024-06-10 11:54:35.134450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.631 [2024-06-10 11:54:35.197394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.201 11:54:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:42.201 11:54:35 -- common/autotest_common.sh@852 -- # return 0 00:16:42.201 11:54:35 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:42.461 Nvme0n1 00:16:42.461 11:54:36 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:42.721 [ 00:16:42.721 { 00:16:42.721 "name": "Nvme0n1", 00:16:42.721 "aliases": [ 00:16:42.721 "2c208624-e819-436a-93bd-b1591d0f8315" 00:16:42.721 ], 00:16:42.721 "product_name": "NVMe disk", 00:16:42.721 "block_size": 4096, 00:16:42.721 "num_blocks": 38912, 00:16:42.721 "uuid": "2c208624-e819-436a-93bd-b1591d0f8315", 00:16:42.721 "assigned_rate_limits": { 00:16:42.721 "rw_ios_per_sec": 0, 00:16:42.721 "rw_mbytes_per_sec": 0, 00:16:42.721 "r_mbytes_per_sec": 0, 00:16:42.721 "w_mbytes_per_sec": 0 00:16:42.721 }, 00:16:42.721 "claimed": false, 00:16:42.721 "zoned": false, 00:16:42.721 "supported_io_types": { 00:16:42.721 "read": true, 00:16:42.721 "write": true, 00:16:42.721 "unmap": true, 00:16:42.721 "write_zeroes": true, 00:16:42.721 "flush": true, 00:16:42.721 "reset": true, 00:16:42.721 "compare": true, 00:16:42.721 "compare_and_write": true, 00:16:42.721 "abort": true, 00:16:42.721 "nvme_admin": true, 00:16:42.721 "nvme_io": true 00:16:42.721 }, 00:16:42.721 "driver_specific": { 00:16:42.721 "nvme": [ 00:16:42.721 { 00:16:42.721 "trid": { 00:16:42.721 "trtype": "TCP", 00:16:42.721 "adrfam": "IPv4", 00:16:42.721 "traddr": "10.0.0.2", 00:16:42.721 "trsvcid": "4420", 00:16:42.721 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:42.721 }, 00:16:42.721 "ctrlr_data": { 00:16:42.721 "cntlid": 1, 00:16:42.721 "vendor_id": "0x8086", 00:16:42.721 "model_number": "SPDK bdev Controller", 00:16:42.721 "serial_number": "SPDK0", 00:16:42.721 "firmware_revision": "24.01.1", 00:16:42.721 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:42.721 "oacs": { 00:16:42.721 "security": 0, 00:16:42.721 "format": 0, 00:16:42.721 "firmware": 0, 00:16:42.721 "ns_manage": 0 00:16:42.721 }, 00:16:42.721 "multi_ctrlr": 
true, 00:16:42.721 "ana_reporting": false 00:16:42.721 }, 00:16:42.721 "vs": { 00:16:42.721 "nvme_version": "1.3" 00:16:42.721 }, 00:16:42.721 "ns_data": { 00:16:42.721 "id": 1, 00:16:42.721 "can_share": true 00:16:42.721 } 00:16:42.721 } 00:16:42.721 ], 00:16:42.721 "mp_policy": "active_passive" 00:16:42.721 } 00:16:42.721 } 00:16:42.721 ] 00:16:42.721 11:54:36 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1909729 00:16:42.721 11:54:36 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:42.722 11:54:36 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:42.722 Running I/O for 10 seconds... 00:16:43.663 Latency(us) 00:16:43.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.663 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:43.663 Nvme0n1 : 1.00 18563.00 72.51 0.00 0.00 0.00 0.00 0.00 00:16:43.663 =================================================================================================================== 00:16:43.663 Total : 18563.00 72.51 0.00 0.00 0.00 0.00 0.00 00:16:43.663 00:16:44.605 11:54:38 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9856aed6-c7df-4715-88d7-cf4bd8584741 00:16:44.865 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:44.865 Nvme0n1 : 2.00 18693.00 73.02 0.00 0.00 0.00 0.00 0.00 00:16:44.865 =================================================================================================================== 00:16:44.865 Total : 18693.00 73.02 0.00 0.00 0.00 0.00 0.00 00:16:44.865 00:16:44.865 true 00:16:44.865 11:54:38 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9856aed6-c7df-4715-88d7-cf4bd8584741 00:16:44.865 11:54:38 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:45.126 11:54:38 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:45.126 11:54:38 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:45.126 11:54:38 -- target/nvmf_lvs_grow.sh@65 -- # wait 1909729 00:16:45.697 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:45.697 Nvme0n1 : 3.00 18731.33 73.17 0.00 0.00 0.00 0.00 0.00 00:16:45.697 =================================================================================================================== 00:16:45.697 Total : 18731.33 73.17 0.00 0.00 0.00 0.00 0.00 00:16:45.697 00:16:46.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:46.639 Nvme0n1 : 4.00 18738.50 73.20 0.00 0.00 0.00 0.00 0.00 00:16:46.639 =================================================================================================================== 00:16:46.639 Total : 18738.50 73.20 0.00 0.00 0.00 0.00 0.00 00:16:46.639 00:16:48.025 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:48.025 Nvme0n1 : 5.00 18778.00 73.35 0.00 0.00 0.00 0.00 0.00 00:16:48.025 =================================================================================================================== 00:16:48.025 Total : 18778.00 73.35 0.00 0.00 0.00 0.00 0.00 00:16:48.025 00:16:48.965 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:48.965 Nvme0n1 : 6.00 18795.00 73.42 0.00 0.00 0.00 0.00 0.00 00:16:48.965 
=================================================================================================================== 00:16:48.965 Total : 18795.00 73.42 0.00 0.00 0.00 0.00 0.00 00:16:48.965 00:16:49.906 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:49.906 Nvme0n1 : 7.00 18816.29 73.50 0.00 0.00 0.00 0.00 0.00 00:16:49.906 =================================================================================================================== 00:16:49.906 Total : 18816.29 73.50 0.00 0.00 0.00 0.00 0.00 00:16:49.906 00:16:50.849 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:50.849 Nvme0n1 : 8.00 18832.25 73.56 0.00 0.00 0.00 0.00 0.00 00:16:50.849 =================================================================================================================== 00:16:50.849 Total : 18832.25 73.56 0.00 0.00 0.00 0.00 0.00 00:16:50.849 00:16:51.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:51.790 Nvme0n1 : 9.00 18846.56 73.62 0.00 0.00 0.00 0.00 0.00 00:16:51.790 =================================================================================================================== 00:16:51.790 Total : 18846.56 73.62 0.00 0.00 0.00 0.00 0.00 00:16:51.790 00:16:52.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:52.732 Nvme0n1 : 10.00 18861.80 73.68 0.00 0.00 0.00 0.00 0.00 00:16:52.732 =================================================================================================================== 00:16:52.732 Total : 18861.80 73.68 0.00 0.00 0.00 0.00 0.00 00:16:52.732 00:16:52.732 00:16:52.732 Latency(us) 00:16:52.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:52.732 Nvme0n1 : 10.01 18862.55 73.68 0.00 0.00 6781.93 3686.40 11304.96 00:16:52.732 =================================================================================================================== 00:16:52.732 Total : 18862.55 73.68 0.00 0.00 6781.93 3686.40 11304.96 00:16:52.732 0 00:16:52.732 11:54:46 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1909464 00:16:52.732 11:54:46 -- common/autotest_common.sh@926 -- # '[' -z 1909464 ']' 00:16:52.732 11:54:46 -- common/autotest_common.sh@930 -- # kill -0 1909464 00:16:52.732 11:54:46 -- common/autotest_common.sh@931 -- # uname 00:16:52.732 11:54:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:52.732 11:54:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1909464 00:16:52.992 11:54:46 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:52.992 11:54:46 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:52.992 11:54:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1909464' 00:16:52.992 killing process with pid 1909464 00:16:52.992 11:54:46 -- common/autotest_common.sh@945 -- # kill 1909464 00:16:52.992 Received shutdown signal, test time was about 10.000000 seconds 00:16:52.992 00:16:52.992 Latency(us) 00:16:52.992 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.992 =================================================================================================================== 00:16:52.992 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:52.992 11:54:46 -- common/autotest_common.sh@950 -- # wait 1909464 00:16:52.992 11:54:46 -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:53.253 11:54:46 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9856aed6-c7df-4715-88d7-cf4bd8584741 00:16:53.253 11:54:46 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:16:53.253 11:54:46 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:16:53.253 11:54:46 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:16:53.253 11:54:46 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:53.514 [2024-06-10 11:54:47.071707] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:53.514 11:54:47 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9856aed6-c7df-4715-88d7-cf4bd8584741 00:16:53.514 11:54:47 -- common/autotest_common.sh@640 -- # local es=0 00:16:53.514 11:54:47 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9856aed6-c7df-4715-88d7-cf4bd8584741 00:16:53.514 11:54:47 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:53.514 11:54:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:53.514 11:54:47 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:53.514 11:54:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:53.514 11:54:47 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:53.514 11:54:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:53.514 11:54:47 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:53.514 11:54:47 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:53.514 11:54:47 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9856aed6-c7df-4715-88d7-cf4bd8584741 00:16:53.514 request: 00:16:53.514 { 00:16:53.514 "uuid": "9856aed6-c7df-4715-88d7-cf4bd8584741", 00:16:53.514 "method": "bdev_lvol_get_lvstores", 00:16:53.514 "req_id": 1 00:16:53.514 } 00:16:53.514 Got JSON-RPC error response 00:16:53.514 response: 00:16:53.514 { 00:16:53.514 "code": -19, 00:16:53.514 "message": "No such device" 00:16:53.514 } 00:16:53.514 11:54:47 -- common/autotest_common.sh@643 -- # es=1 00:16:53.514 11:54:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:53.514 11:54:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:53.514 11:54:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:53.514 11:54:47 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:53.774 aio_bdev 00:16:53.774 11:54:47 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 2c208624-e819-436a-93bd-b1591d0f8315 00:16:53.774 11:54:47 -- common/autotest_common.sh@887 -- # local bdev_name=2c208624-e819-436a-93bd-b1591d0f8315 00:16:53.774 11:54:47 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:53.774 11:54:47 -- common/autotest_common.sh@889 -- # local i 00:16:53.774 11:54:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:53.774 11:54:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:53.774 11:54:47 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:54.035 11:54:47 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2c208624-e819-436a-93bd-b1591d0f8315 -t 2000 00:16:54.035 [ 00:16:54.035 { 00:16:54.035 "name": "2c208624-e819-436a-93bd-b1591d0f8315", 00:16:54.035 "aliases": [ 00:16:54.035 "lvs/lvol" 00:16:54.035 ], 00:16:54.035 "product_name": "Logical Volume", 00:16:54.035 "block_size": 4096, 00:16:54.035 "num_blocks": 38912, 00:16:54.035 "uuid": "2c208624-e819-436a-93bd-b1591d0f8315", 00:16:54.035 "assigned_rate_limits": { 00:16:54.035 "rw_ios_per_sec": 0, 00:16:54.035 "rw_mbytes_per_sec": 0, 00:16:54.035 "r_mbytes_per_sec": 0, 00:16:54.035 "w_mbytes_per_sec": 0 00:16:54.035 }, 00:16:54.035 "claimed": false, 00:16:54.035 "zoned": false, 00:16:54.035 "supported_io_types": { 00:16:54.035 "read": true, 00:16:54.035 "write": true, 00:16:54.035 "unmap": true, 00:16:54.035 "write_zeroes": true, 00:16:54.035 "flush": false, 00:16:54.035 "reset": true, 00:16:54.035 "compare": false, 00:16:54.035 "compare_and_write": false, 00:16:54.035 "abort": false, 00:16:54.035 "nvme_admin": false, 00:16:54.035 "nvme_io": false 00:16:54.035 }, 00:16:54.035 "driver_specific": { 00:16:54.035 "lvol": { 00:16:54.035 "lvol_store_uuid": "9856aed6-c7df-4715-88d7-cf4bd8584741", 00:16:54.035 "base_bdev": "aio_bdev", 00:16:54.035 "thin_provision": false, 00:16:54.035 "snapshot": false, 00:16:54.035 "clone": false, 00:16:54.035 "esnap_clone": false 00:16:54.035 } 00:16:54.035 } 00:16:54.035 } 00:16:54.035 ] 00:16:54.035 11:54:47 -- common/autotest_common.sh@895 -- # return 0 00:16:54.035 11:54:47 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9856aed6-c7df-4715-88d7-cf4bd8584741 00:16:54.035 11:54:47 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:16:54.296 11:54:47 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:16:54.296 11:54:47 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9856aed6-c7df-4715-88d7-cf4bd8584741 00:16:54.296 11:54:47 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:16:54.296 11:54:48 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:16:54.296 11:54:48 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2c208624-e819-436a-93bd-b1591d0f8315 00:16:54.557 11:54:48 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9856aed6-c7df-4715-88d7-cf4bd8584741 00:16:54.557 11:54:48 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:54.818 11:54:48 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:54.818 00:16:54.818 real 0m15.002s 00:16:54.818 user 0m14.729s 00:16:54.818 sys 0m1.252s 00:16:54.818 11:54:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:16:54.818 11:54:48 -- common/autotest_common.sh@10 -- # set +x 00:16:54.818 ************************************ 00:16:54.818 END TEST lvs_grow_clean 00:16:54.818 ************************************ 00:16:54.818 11:54:48 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:54.818 11:54:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:54.818 11:54:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:54.818 11:54:48 -- common/autotest_common.sh@10 -- # set +x 00:16:54.818 ************************************ 00:16:54.818 START TEST lvs_grow_dirty 00:16:54.818 ************************************ 00:16:54.818 11:54:48 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:16:54.818 11:54:48 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:54.818 11:54:48 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:54.818 11:54:48 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:54.818 11:54:48 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:54.818 11:54:48 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:54.818 11:54:48 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:54.818 11:54:48 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:54.818 11:54:48 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:54.818 11:54:48 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:55.079 11:54:48 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:55.079 11:54:48 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:55.340 11:54:48 -- target/nvmf_lvs_grow.sh@28 -- # lvs=4ec29a78-7be4-4767-8fc0-a066c026e0d4 00:16:55.340 11:54:48 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ec29a78-7be4-4767-8fc0-a066c026e0d4 00:16:55.340 11:54:48 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:55.340 11:54:49 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:55.340 11:54:49 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:55.340 11:54:49 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4ec29a78-7be4-4767-8fc0-a066c026e0d4 lvol 150 00:16:55.601 11:54:49 -- target/nvmf_lvs_grow.sh@33 -- # lvol=06fc57be-c89d-4de1-a38b-23253c910e08 00:16:55.601 11:54:49 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:55.601 11:54:49 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:55.601 [2024-06-10 11:54:49.327257] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:55.601 [2024-06-10 11:54:49.327307] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:55.601 
true 00:16:55.601 11:54:49 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ec29a78-7be4-4767-8fc0-a066c026e0d4 00:16:55.601 11:54:49 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:55.862 11:54:49 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:55.862 11:54:49 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:56.122 11:54:49 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 06fc57be-c89d-4de1-a38b-23253c910e08 00:16:56.122 11:54:49 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:56.383 11:54:49 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:56.383 11:54:50 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:56.383 11:54:50 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1912500 00:16:56.383 11:54:50 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:56.383 11:54:50 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1912500 /var/tmp/bdevperf.sock 00:16:56.383 11:54:50 -- common/autotest_common.sh@819 -- # '[' -z 1912500 ']' 00:16:56.383 11:54:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:56.383 11:54:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:56.383 11:54:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:56.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:56.383 11:54:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:56.383 11:54:50 -- common/autotest_common.sh@10 -- # set +x 00:16:56.383 [2024-06-10 11:54:50.098852] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:16:56.384 [2024-06-10 11:54:50.098907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1912500 ] 00:16:56.384 EAL: No free 2048 kB hugepages reported on node 1 00:16:56.645 [2024-06-10 11:54:50.173658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.645 [2024-06-10 11:54:50.225802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.216 11:54:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:57.216 11:54:50 -- common/autotest_common.sh@852 -- # return 0 00:16:57.216 11:54:50 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:57.477 Nvme0n1 00:16:57.477 11:54:51 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:57.738 [ 00:16:57.738 { 00:16:57.738 "name": "Nvme0n1", 00:16:57.738 "aliases": [ 00:16:57.738 "06fc57be-c89d-4de1-a38b-23253c910e08" 00:16:57.738 ], 00:16:57.738 "product_name": "NVMe disk", 00:16:57.738 "block_size": 4096, 00:16:57.738 "num_blocks": 38912, 00:16:57.738 "uuid": "06fc57be-c89d-4de1-a38b-23253c910e08", 00:16:57.738 "assigned_rate_limits": { 00:16:57.738 "rw_ios_per_sec": 0, 00:16:57.738 "rw_mbytes_per_sec": 0, 00:16:57.738 "r_mbytes_per_sec": 0, 00:16:57.738 "w_mbytes_per_sec": 0 00:16:57.738 }, 00:16:57.738 "claimed": false, 00:16:57.738 "zoned": false, 00:16:57.738 "supported_io_types": { 00:16:57.738 "read": true, 00:16:57.738 "write": true, 00:16:57.738 "unmap": true, 00:16:57.738 "write_zeroes": true, 00:16:57.738 "flush": true, 00:16:57.738 "reset": true, 00:16:57.738 "compare": true, 00:16:57.738 "compare_and_write": true, 00:16:57.738 "abort": true, 00:16:57.738 "nvme_admin": true, 00:16:57.738 "nvme_io": true 00:16:57.738 }, 00:16:57.738 "driver_specific": { 00:16:57.738 "nvme": [ 00:16:57.738 { 00:16:57.738 "trid": { 00:16:57.738 "trtype": "TCP", 00:16:57.738 "adrfam": "IPv4", 00:16:57.738 "traddr": "10.0.0.2", 00:16:57.738 "trsvcid": "4420", 00:16:57.738 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:57.738 }, 00:16:57.738 "ctrlr_data": { 00:16:57.738 "cntlid": 1, 00:16:57.738 "vendor_id": "0x8086", 00:16:57.738 "model_number": "SPDK bdev Controller", 00:16:57.738 "serial_number": "SPDK0", 00:16:57.738 "firmware_revision": "24.01.1", 00:16:57.738 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:57.738 "oacs": { 00:16:57.738 "security": 0, 00:16:57.738 "format": 0, 00:16:57.738 "firmware": 0, 00:16:57.738 "ns_manage": 0 00:16:57.738 }, 00:16:57.738 "multi_ctrlr": true, 00:16:57.738 "ana_reporting": false 00:16:57.738 }, 00:16:57.738 "vs": { 00:16:57.738 "nvme_version": "1.3" 00:16:57.738 }, 00:16:57.738 "ns_data": { 00:16:57.738 "id": 1, 00:16:57.738 "can_share": true 00:16:57.738 } 00:16:57.738 } 00:16:57.738 ], 00:16:57.738 "mp_policy": "active_passive" 00:16:57.738 } 00:16:57.738 } 00:16:57.738 ] 00:16:57.738 11:54:51 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1912836 00:16:57.738 11:54:51 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:57.738 11:54:51 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:57.738 Running I/O 
for 10 seconds... 00:16:58.680 Latency(us) 00:16:58.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:58.680 Nvme0n1 : 1.00 18634.00 72.79 0.00 0.00 0.00 0.00 0.00 00:16:58.680 =================================================================================================================== 00:16:58.680 Total : 18634.00 72.79 0.00 0.00 0.00 0.00 0.00 00:16:58.680 00:16:59.622 11:54:53 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4ec29a78-7be4-4767-8fc0-a066c026e0d4 00:16:59.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:59.622 Nvme0n1 : 2.00 18753.50 73.26 0.00 0.00 0.00 0.00 0.00 00:16:59.622 =================================================================================================================== 00:16:59.622 Total : 18753.50 73.26 0.00 0.00 0.00 0.00 0.00 00:16:59.622 00:16:59.883 true 00:16:59.883 11:54:53 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ec29a78-7be4-4767-8fc0-a066c026e0d4 00:16:59.883 11:54:53 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:59.883 11:54:53 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:59.883 11:54:53 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:59.883 11:54:53 -- target/nvmf_lvs_grow.sh@65 -- # wait 1912836 00:17:00.826 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:00.826 Nvme0n1 : 3.00 18798.00 73.43 0.00 0.00 0.00 0.00 0.00 00:17:00.826 =================================================================================================================== 00:17:00.826 Total : 18798.00 73.43 0.00 0.00 0.00 0.00 0.00 00:17:00.826 00:17:01.769 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:01.769 Nvme0n1 : 4.00 18834.50 73.57 0.00 0.00 0.00 0.00 0.00 00:17:01.769 =================================================================================================================== 00:17:01.769 Total : 18834.50 73.57 0.00 0.00 0.00 0.00 0.00 00:17:01.769 00:17:02.713 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:02.713 Nvme0n1 : 5.00 18855.00 73.65 0.00 0.00 0.00 0.00 0.00 00:17:02.713 =================================================================================================================== 00:17:02.713 Total : 18855.00 73.65 0.00 0.00 0.00 0.00 0.00 00:17:02.713 00:17:03.656 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:03.656 Nvme0n1 : 6.00 18881.67 73.76 0.00 0.00 0.00 0.00 0.00 00:17:03.656 =================================================================================================================== 00:17:03.656 Total : 18881.67 73.76 0.00 0.00 0.00 0.00 0.00 00:17:03.656 00:17:04.598 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:04.598 Nvme0n1 : 7.00 18898.71 73.82 0.00 0.00 0.00 0.00 0.00 00:17:04.598 =================================================================================================================== 00:17:04.598 Total : 18898.71 73.82 0.00 0.00 0.00 0.00 0.00 00:17:04.598 00:17:05.984 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:05.984 Nvme0n1 : 8.00 18904.38 73.85 0.00 0.00 0.00 0.00 0.00 00:17:05.984 
=================================================================================================================== 00:17:05.984 Total : 18904.38 73.85 0.00 0.00 0.00 0.00 0.00 00:17:05.984 00:17:06.927 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:06.927 Nvme0n1 : 9.00 18910.67 73.87 0.00 0.00 0.00 0.00 0.00 00:17:06.927 =================================================================================================================== 00:17:06.927 Total : 18910.67 73.87 0.00 0.00 0.00 0.00 0.00 00:17:06.927 00:17:07.870 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:07.870 Nvme0n1 : 10.00 18919.40 73.90 0.00 0.00 0.00 0.00 0.00 00:17:07.870 =================================================================================================================== 00:17:07.870 Total : 18919.40 73.90 0.00 0.00 0.00 0.00 0.00 00:17:07.870 00:17:07.870 00:17:07.870 Latency(us) 00:17:07.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.870 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:07.870 Nvme0n1 : 10.00 18923.33 73.92 0.00 0.00 6761.01 3604.48 11304.96 00:17:07.870 =================================================================================================================== 00:17:07.870 Total : 18923.33 73.92 0.00 0.00 6761.01 3604.48 11304.96 00:17:07.870 0 00:17:07.870 11:55:01 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1912500 00:17:07.870 11:55:01 -- common/autotest_common.sh@926 -- # '[' -z 1912500 ']' 00:17:07.870 11:55:01 -- common/autotest_common.sh@930 -- # kill -0 1912500 00:17:07.870 11:55:01 -- common/autotest_common.sh@931 -- # uname 00:17:07.870 11:55:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:07.870 11:55:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1912500 00:17:07.870 11:55:01 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:07.870 11:55:01 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:07.870 11:55:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1912500' 00:17:07.870 killing process with pid 1912500 00:17:07.870 11:55:01 -- common/autotest_common.sh@945 -- # kill 1912500 00:17:07.870 Received shutdown signal, test time was about 10.000000 seconds 00:17:07.870 00:17:07.871 Latency(us) 00:17:07.871 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.871 =================================================================================================================== 00:17:07.871 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:07.871 11:55:01 -- common/autotest_common.sh@950 -- # wait 1912500 00:17:07.871 11:55:01 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:08.132 11:55:01 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ec29a78-7be4-4767-8fc0-a066c026e0d4 00:17:08.132 11:55:01 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:08.392 11:55:01 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:08.392 11:55:01 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:17:08.392 11:55:01 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1908996 00:17:08.392 11:55:01 -- target/nvmf_lvs_grow.sh@74 -- # wait 1908996 00:17:08.392 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1908996 Killed "${NVMF_APP[@]}" "$@" 00:17:08.392 11:55:01 -- target/nvmf_lvs_grow.sh@74 -- # true 00:17:08.392 11:55:01 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:17:08.392 11:55:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:08.392 11:55:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:08.392 11:55:01 -- common/autotest_common.sh@10 -- # set +x 00:17:08.392 11:55:01 -- nvmf/common.sh@469 -- # nvmfpid=1914878 00:17:08.392 11:55:01 -- nvmf/common.sh@470 -- # waitforlisten 1914878 00:17:08.392 11:55:01 -- common/autotest_common.sh@819 -- # '[' -z 1914878 ']' 00:17:08.392 11:55:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:08.392 11:55:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.392 11:55:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:08.392 11:55:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.392 11:55:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:08.392 11:55:01 -- common/autotest_common.sh@10 -- # set +x 00:17:08.392 [2024-06-10 11:55:01.999152] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:08.393 [2024-06-10 11:55:01.999202] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.393 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.393 [2024-06-10 11:55:02.063096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.393 [2024-06-10 11:55:02.125730] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:08.393 [2024-06-10 11:55:02.125847] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.393 [2024-06-10 11:55:02.125855] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:08.393 [2024-06-10 11:55:02.125863] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:08.393 [2024-06-10 11:55:02.125879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.963 11:55:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:08.963 11:55:02 -- common/autotest_common.sh@852 -- # return 0 00:17:08.964 11:55:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:08.964 11:55:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:08.964 11:55:02 -- common/autotest_common.sh@10 -- # set +x 00:17:09.225 11:55:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.225 11:55:02 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:09.225 [2024-06-10 11:55:02.902714] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:09.225 [2024-06-10 11:55:02.902808] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:09.225 [2024-06-10 11:55:02.902838] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:09.225 11:55:02 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:17:09.225 11:55:02 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 06fc57be-c89d-4de1-a38b-23253c910e08 00:17:09.225 11:55:02 -- common/autotest_common.sh@887 -- # local bdev_name=06fc57be-c89d-4de1-a38b-23253c910e08 00:17:09.225 11:55:02 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:09.225 11:55:02 -- common/autotest_common.sh@889 -- # local i 00:17:09.225 11:55:02 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:09.225 11:55:02 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:09.225 11:55:02 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:09.487 11:55:03 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 06fc57be-c89d-4de1-a38b-23253c910e08 -t 2000 00:17:09.487 [ 00:17:09.487 { 00:17:09.487 "name": "06fc57be-c89d-4de1-a38b-23253c910e08", 00:17:09.487 "aliases": [ 00:17:09.487 "lvs/lvol" 00:17:09.487 ], 00:17:09.487 "product_name": "Logical Volume", 00:17:09.487 "block_size": 4096, 00:17:09.487 "num_blocks": 38912, 00:17:09.487 "uuid": "06fc57be-c89d-4de1-a38b-23253c910e08", 00:17:09.487 "assigned_rate_limits": { 00:17:09.487 "rw_ios_per_sec": 0, 00:17:09.487 "rw_mbytes_per_sec": 0, 00:17:09.487 "r_mbytes_per_sec": 0, 00:17:09.487 "w_mbytes_per_sec": 0 00:17:09.487 }, 00:17:09.487 "claimed": false, 00:17:09.487 "zoned": false, 00:17:09.487 "supported_io_types": { 00:17:09.487 "read": true, 00:17:09.487 "write": true, 00:17:09.487 "unmap": true, 00:17:09.487 "write_zeroes": true, 00:17:09.487 "flush": false, 00:17:09.487 "reset": true, 00:17:09.487 "compare": false, 00:17:09.487 "compare_and_write": false, 00:17:09.487 "abort": false, 00:17:09.487 "nvme_admin": false, 00:17:09.487 "nvme_io": false 00:17:09.487 }, 00:17:09.487 "driver_specific": { 00:17:09.487 "lvol": { 00:17:09.487 "lvol_store_uuid": "4ec29a78-7be4-4767-8fc0-a066c026e0d4", 00:17:09.487 "base_bdev": "aio_bdev", 00:17:09.487 "thin_provision": false, 00:17:09.487 "snapshot": false, 00:17:09.487 "clone": false, 00:17:09.487 "esnap_clone": false 00:17:09.487 } 00:17:09.487 } 00:17:09.487 } 00:17:09.487 ] 00:17:09.487 11:55:03 -- common/autotest_common.sh@895 -- # return 0 00:17:09.487 11:55:03 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ec29a78-7be4-4767-8fc0-a066c026e0d4 00:17:09.487 11:55:03 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:17:09.749 11:55:03 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:17:09.749 11:55:03 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ec29a78-7be4-4767-8fc0-a066c026e0d4 00:17:09.749 11:55:03 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:17:09.749 11:55:03 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:17:09.749 11:55:03 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:10.010 [2024-06-10 11:55:03.606500] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:10.010 11:55:03 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ec29a78-7be4-4767-8fc0-a066c026e0d4 00:17:10.010 11:55:03 -- common/autotest_common.sh@640 -- # local es=0 00:17:10.010 11:55:03 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ec29a78-7be4-4767-8fc0-a066c026e0d4 00:17:10.010 11:55:03 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:10.010 11:55:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:10.010 11:55:03 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:10.010 11:55:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:10.010 11:55:03 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:10.010 11:55:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:10.010 11:55:03 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:10.010 11:55:03 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:10.010 11:55:03 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ec29a78-7be4-4767-8fc0-a066c026e0d4 00:17:10.010 request: 00:17:10.010 { 00:17:10.010 "uuid": "4ec29a78-7be4-4767-8fc0-a066c026e0d4", 00:17:10.010 "method": "bdev_lvol_get_lvstores", 00:17:10.010 "req_id": 1 00:17:10.010 } 00:17:10.010 Got JSON-RPC error response 00:17:10.010 response: 00:17:10.010 { 00:17:10.010 "code": -19, 00:17:10.010 "message": "No such device" 00:17:10.010 } 00:17:10.271 11:55:03 -- common/autotest_common.sh@643 -- # es=1 00:17:10.271 11:55:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:10.271 11:55:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:10.271 11:55:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:10.271 11:55:03 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:10.271 aio_bdev 00:17:10.271 11:55:03 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 06fc57be-c89d-4de1-a38b-23253c910e08 00:17:10.271 11:55:03 -- 
common/autotest_common.sh@887 -- # local bdev_name=06fc57be-c89d-4de1-a38b-23253c910e08 00:17:10.271 11:55:03 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:10.271 11:55:03 -- common/autotest_common.sh@889 -- # local i 00:17:10.271 11:55:03 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:10.271 11:55:03 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:10.271 11:55:03 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:10.533 11:55:04 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 06fc57be-c89d-4de1-a38b-23253c910e08 -t 2000 00:17:10.533 [ 00:17:10.533 { 00:17:10.533 "name": "06fc57be-c89d-4de1-a38b-23253c910e08", 00:17:10.533 "aliases": [ 00:17:10.533 "lvs/lvol" 00:17:10.533 ], 00:17:10.533 "product_name": "Logical Volume", 00:17:10.533 "block_size": 4096, 00:17:10.533 "num_blocks": 38912, 00:17:10.533 "uuid": "06fc57be-c89d-4de1-a38b-23253c910e08", 00:17:10.533 "assigned_rate_limits": { 00:17:10.533 "rw_ios_per_sec": 0, 00:17:10.533 "rw_mbytes_per_sec": 0, 00:17:10.533 "r_mbytes_per_sec": 0, 00:17:10.533 "w_mbytes_per_sec": 0 00:17:10.533 }, 00:17:10.533 "claimed": false, 00:17:10.533 "zoned": false, 00:17:10.533 "supported_io_types": { 00:17:10.533 "read": true, 00:17:10.533 "write": true, 00:17:10.533 "unmap": true, 00:17:10.533 "write_zeroes": true, 00:17:10.533 "flush": false, 00:17:10.533 "reset": true, 00:17:10.533 "compare": false, 00:17:10.533 "compare_and_write": false, 00:17:10.533 "abort": false, 00:17:10.533 "nvme_admin": false, 00:17:10.533 "nvme_io": false 00:17:10.533 }, 00:17:10.533 "driver_specific": { 00:17:10.533 "lvol": { 00:17:10.533 "lvol_store_uuid": "4ec29a78-7be4-4767-8fc0-a066c026e0d4", 00:17:10.533 "base_bdev": "aio_bdev", 00:17:10.533 "thin_provision": false, 00:17:10.533 "snapshot": false, 00:17:10.533 "clone": false, 00:17:10.533 "esnap_clone": false 00:17:10.533 } 00:17:10.533 } 00:17:10.533 } 00:17:10.533 ] 00:17:10.533 11:55:04 -- common/autotest_common.sh@895 -- # return 0 00:17:10.533 11:55:04 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ec29a78-7be4-4767-8fc0-a066c026e0d4 00:17:10.533 11:55:04 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:10.793 11:55:04 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:10.793 11:55:04 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ec29a78-7be4-4767-8fc0-a066c026e0d4 00:17:10.793 11:55:04 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:10.793 11:55:04 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:10.793 11:55:04 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 06fc57be-c89d-4de1-a38b-23253c910e08 00:17:11.054 11:55:04 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4ec29a78-7be4-4767-8fc0-a066c026e0d4 00:17:11.314 11:55:04 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:11.314 11:55:04 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:11.314 00:17:11.314 real 0m16.486s 00:17:11.314 user 
0m43.391s 00:17:11.314 sys 0m2.735s 00:17:11.314 11:55:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:11.314 11:55:05 -- common/autotest_common.sh@10 -- # set +x 00:17:11.314 ************************************ 00:17:11.314 END TEST lvs_grow_dirty 00:17:11.314 ************************************ 00:17:11.314 11:55:05 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:11.314 11:55:05 -- common/autotest_common.sh@796 -- # type=--id 00:17:11.314 11:55:05 -- common/autotest_common.sh@797 -- # id=0 00:17:11.314 11:55:05 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:11.314 11:55:05 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:11.314 11:55:05 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:11.314 11:55:05 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:17:11.314 11:55:05 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:11.314 11:55:05 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:11.314 nvmf_trace.0 00:17:11.574 11:55:05 -- common/autotest_common.sh@811 -- # return 0 00:17:11.574 11:55:05 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:11.574 11:55:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:11.574 11:55:05 -- nvmf/common.sh@116 -- # sync 00:17:11.574 11:55:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:11.574 11:55:05 -- nvmf/common.sh@119 -- # set +e 00:17:11.574 11:55:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:11.574 11:55:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:11.574 rmmod nvme_tcp 00:17:11.574 rmmod nvme_fabrics 00:17:11.574 rmmod nvme_keyring 00:17:11.574 11:55:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:11.574 11:55:05 -- nvmf/common.sh@123 -- # set -e 00:17:11.574 11:55:05 -- nvmf/common.sh@124 -- # return 0 00:17:11.574 11:55:05 -- nvmf/common.sh@477 -- # '[' -n 1914878 ']' 00:17:11.574 11:55:05 -- nvmf/common.sh@478 -- # killprocess 1914878 00:17:11.574 11:55:05 -- common/autotest_common.sh@926 -- # '[' -z 1914878 ']' 00:17:11.574 11:55:05 -- common/autotest_common.sh@930 -- # kill -0 1914878 00:17:11.574 11:55:05 -- common/autotest_common.sh@931 -- # uname 00:17:11.574 11:55:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:11.574 11:55:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1914878 00:17:11.574 11:55:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:11.574 11:55:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:11.574 11:55:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1914878' 00:17:11.574 killing process with pid 1914878 00:17:11.574 11:55:05 -- common/autotest_common.sh@945 -- # kill 1914878 00:17:11.575 11:55:05 -- common/autotest_common.sh@950 -- # wait 1914878 00:17:11.836 11:55:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:11.836 11:55:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:11.836 11:55:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:11.836 11:55:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:11.836 11:55:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:11.836 11:55:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.836 11:55:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:11.836 11:55:05 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:17:13.845 11:55:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:13.845 00:17:13.845 real 0m42.317s 00:17:13.845 user 1m3.863s 00:17:13.845 sys 0m9.668s 00:17:13.845 11:55:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:13.845 11:55:07 -- common/autotest_common.sh@10 -- # set +x 00:17:13.845 ************************************ 00:17:13.845 END TEST nvmf_lvs_grow 00:17:13.845 ************************************ 00:17:13.845 11:55:07 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:13.845 11:55:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:13.845 11:55:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:13.845 11:55:07 -- common/autotest_common.sh@10 -- # set +x 00:17:13.845 ************************************ 00:17:13.845 START TEST nvmf_bdev_io_wait 00:17:13.845 ************************************ 00:17:13.845 11:55:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:13.845 * Looking for test storage... 00:17:13.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:13.845 11:55:07 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:13.845 11:55:07 -- nvmf/common.sh@7 -- # uname -s 00:17:13.845 11:55:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.845 11:55:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.845 11:55:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.845 11:55:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.845 11:55:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.845 11:55:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.845 11:55:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.845 11:55:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.845 11:55:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.845 11:55:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:13.845 11:55:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:13.845 11:55:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:13.845 11:55:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.845 11:55:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:13.845 11:55:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:13.845 11:55:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:13.845 11:55:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.845 11:55:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.845 11:55:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.845 11:55:07 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.845 11:55:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.845 11:55:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.845 11:55:07 -- paths/export.sh@5 -- # export PATH 00:17:13.845 11:55:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.845 11:55:07 -- nvmf/common.sh@46 -- # : 0 00:17:13.845 11:55:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:13.845 11:55:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:13.845 11:55:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:13.845 11:55:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.845 11:55:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.845 11:55:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:13.845 11:55:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:13.845 11:55:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:13.845 11:55:07 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:13.845 11:55:07 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:13.845 11:55:07 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:13.845 11:55:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:13.845 11:55:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.106 11:55:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:14.107 11:55:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:14.107 11:55:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:14.107 11:55:07 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.107 11:55:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.107 11:55:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.107 11:55:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:14.107 11:55:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:14.107 11:55:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:14.107 11:55:07 -- common/autotest_common.sh@10 -- # set +x 00:17:20.695 11:55:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:20.695 11:55:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:20.695 11:55:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:20.695 11:55:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:20.695 11:55:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:20.695 11:55:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:20.695 11:55:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:20.695 11:55:14 -- nvmf/common.sh@294 -- # net_devs=() 00:17:20.695 11:55:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:20.695 11:55:14 -- nvmf/common.sh@295 -- # e810=() 00:17:20.695 11:55:14 -- nvmf/common.sh@295 -- # local -ga e810 00:17:20.695 11:55:14 -- nvmf/common.sh@296 -- # x722=() 00:17:20.695 11:55:14 -- nvmf/common.sh@296 -- # local -ga x722 00:17:20.695 11:55:14 -- nvmf/common.sh@297 -- # mlx=() 00:17:20.695 11:55:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:20.695 11:55:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:20.695 11:55:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:20.695 11:55:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:20.695 11:55:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:20.695 11:55:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:20.695 11:55:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:20.695 11:55:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:20.695 11:55:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:20.695 11:55:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:20.695 11:55:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:20.695 11:55:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:20.695 11:55:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:20.695 11:55:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:20.695 11:55:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:20.695 11:55:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:20.695 11:55:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:20.695 11:55:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:20.695 11:55:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:20.695 11:55:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:20.695 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:20.695 11:55:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:20.695 11:55:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:20.695 11:55:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.695 11:55:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.695 11:55:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:20.695 11:55:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:17:20.695 11:55:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:20.695 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:20.695 11:55:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:20.695 11:55:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:20.695 11:55:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.695 11:55:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.695 11:55:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:20.695 11:55:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:20.695 11:55:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:20.695 11:55:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:20.695 11:55:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:20.696 11:55:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.696 11:55:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:20.696 11:55:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.696 11:55:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:20.696 Found net devices under 0000:31:00.0: cvl_0_0 00:17:20.696 11:55:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.696 11:55:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:20.696 11:55:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.696 11:55:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:20.696 11:55:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.696 11:55:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:20.696 Found net devices under 0000:31:00.1: cvl_0_1 00:17:20.696 11:55:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.696 11:55:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:20.696 11:55:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:20.696 11:55:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:20.696 11:55:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:20.696 11:55:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:20.696 11:55:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:20.696 11:55:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:20.696 11:55:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:20.696 11:55:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:20.696 11:55:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:20.696 11:55:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:20.696 11:55:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:20.696 11:55:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:20.696 11:55:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:20.696 11:55:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:20.696 11:55:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:20.696 11:55:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:20.957 11:55:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:20.957 11:55:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:20.957 11:55:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:20.957 11:55:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:20.957 11:55:14 -- nvmf/common.sh@259 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:20.957 11:55:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:21.219 11:55:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:21.219 11:55:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:21.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:21.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:17:21.219 00:17:21.219 --- 10.0.0.2 ping statistics --- 00:17:21.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.219 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:17:21.219 11:55:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:21.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:21.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:17:21.219 00:17:21.219 --- 10.0.0.1 ping statistics --- 00:17:21.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.219 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:17:21.219 11:55:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:21.219 11:55:14 -- nvmf/common.sh@410 -- # return 0 00:17:21.219 11:55:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:21.219 11:55:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:21.219 11:55:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:21.219 11:55:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:21.219 11:55:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:21.219 11:55:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:21.219 11:55:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:21.219 11:55:14 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:21.219 11:55:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:21.219 11:55:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:21.219 11:55:14 -- common/autotest_common.sh@10 -- # set +x 00:17:21.219 11:55:14 -- nvmf/common.sh@469 -- # nvmfpid=1919727 00:17:21.219 11:55:14 -- nvmf/common.sh@470 -- # waitforlisten 1919727 00:17:21.219 11:55:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:21.219 11:55:14 -- common/autotest_common.sh@819 -- # '[' -z 1919727 ']' 00:17:21.219 11:55:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.219 11:55:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:21.219 11:55:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.219 11:55:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:21.219 11:55:14 -- common/autotest_common.sh@10 -- # set +x 00:17:21.219 [2024-06-10 11:55:14.857292] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
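Before the target application starts, nvmf_tcp_init splits the two E810 ports across network namespaces: cvl_0_0 (the target side) is moved into a fresh cvl_0_0_ns_spdk namespace with 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, an iptables rule admits NVMe/TCP traffic on port 4420, and one ping in each direction confirms the path. Condensed from the trace above (the interface names are simply what this machine enumerated):

  ip netns add cvl_0_0_ns_spdk                                  # target gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                                            # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target ns -> root ns

This is also why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD above: the nvmf_tgt launched next runs under ip netns exec cvl_0_0_ns_spdk.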
00:17:21.219 [2024-06-10 11:55:14.857353] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.219 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.219 [2024-06-10 11:55:14.927867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:21.480 [2024-06-10 11:55:15.002694] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:21.480 [2024-06-10 11:55:15.002830] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:21.480 [2024-06-10 11:55:15.002841] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:21.480 [2024-06-10 11:55:15.002849] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:21.480 [2024-06-10 11:55:15.002988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.480 [2024-06-10 11:55:15.003101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.480 [2024-06-10 11:55:15.003254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.481 [2024-06-10 11:55:15.003264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:22.054 11:55:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:22.054 11:55:15 -- common/autotest_common.sh@852 -- # return 0 00:17:22.054 11:55:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:22.054 11:55:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:22.054 11:55:15 -- common/autotest_common.sh@10 -- # set +x 00:17:22.054 11:55:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.054 11:55:15 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:22.054 11:55:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:22.054 11:55:15 -- common/autotest_common.sh@10 -- # set +x 00:17:22.054 11:55:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:22.054 11:55:15 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:22.054 11:55:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:22.054 11:55:15 -- common/autotest_common.sh@10 -- # set +x 00:17:22.054 11:55:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:22.054 11:55:15 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:22.054 11:55:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:22.054 11:55:15 -- common/autotest_common.sh@10 -- # set +x 00:17:22.054 [2024-06-10 11:55:15.739289] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:22.054 11:55:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:22.054 11:55:15 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:22.054 11:55:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:22.054 11:55:15 -- common/autotest_common.sh@10 -- # set +x 00:17:22.054 Malloc0 00:17:22.054 11:55:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:22.054 11:55:15 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:22.054 11:55:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:22.054 11:55:15 -- common/autotest_common.sh@10 -- # set +x 00:17:22.054 11:55:15 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:22.054 11:55:15 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:22.054 11:55:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:22.054 11:55:15 -- common/autotest_common.sh@10 -- # set +x 00:17:22.054 11:55:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:22.054 11:55:15 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:22.054 11:55:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:22.054 11:55:15 -- common/autotest_common.sh@10 -- # set +x 00:17:22.054 [2024-06-10 11:55:15.803638] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:22.054 11:55:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:22.054 11:55:15 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1920055 00:17:22.054 11:55:15 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:22.054 11:55:15 -- target/bdev_io_wait.sh@30 -- # READ_PID=1920057 00:17:22.054 11:55:15 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:22.054 11:55:15 -- nvmf/common.sh@520 -- # config=() 00:17:22.054 11:55:15 -- nvmf/common.sh@520 -- # local subsystem config 00:17:22.054 11:55:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:22.054 11:55:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:22.054 { 00:17:22.054 "params": { 00:17:22.054 "name": "Nvme$subsystem", 00:17:22.054 "trtype": "$TEST_TRANSPORT", 00:17:22.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:22.054 "adrfam": "ipv4", 00:17:22.054 "trsvcid": "$NVMF_PORT", 00:17:22.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:22.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:22.054 "hdgst": ${hdgst:-false}, 00:17:22.054 "ddgst": ${ddgst:-false} 00:17:22.054 }, 00:17:22.054 "method": "bdev_nvme_attach_controller" 00:17:22.054 } 00:17:22.054 EOF 00:17:22.054 )") 00:17:22.054 11:55:15 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1920059 00:17:22.054 11:55:15 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:22.054 11:55:15 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:22.054 11:55:15 -- nvmf/common.sh@520 -- # config=() 00:17:22.054 11:55:15 -- nvmf/common.sh@520 -- # local subsystem config 00:17:22.054 11:55:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:22.054 11:55:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:22.054 { 00:17:22.054 "params": { 00:17:22.054 "name": "Nvme$subsystem", 00:17:22.054 "trtype": "$TEST_TRANSPORT", 00:17:22.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:22.054 "adrfam": "ipv4", 00:17:22.054 "trsvcid": "$NVMF_PORT", 00:17:22.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:22.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:22.054 "hdgst": ${hdgst:-false}, 00:17:22.054 "ddgst": ${ddgst:-false} 00:17:22.054 }, 00:17:22.054 "method": "bdev_nvme_attach_controller" 00:17:22.054 } 00:17:22.054 EOF 00:17:22.054 )") 00:17:22.054 11:55:15 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1920062 00:17:22.054 11:55:15 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 
-q 128 -o 4096 -w flush -t 1 -s 256 00:17:22.054 11:55:15 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:22.054 11:55:15 -- target/bdev_io_wait.sh@35 -- # sync 00:17:22.054 11:55:15 -- nvmf/common.sh@542 -- # cat 00:17:22.054 11:55:15 -- nvmf/common.sh@520 -- # config=() 00:17:22.054 11:55:15 -- nvmf/common.sh@520 -- # local subsystem config 00:17:22.054 11:55:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:22.054 11:55:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:22.054 { 00:17:22.054 "params": { 00:17:22.054 "name": "Nvme$subsystem", 00:17:22.054 "trtype": "$TEST_TRANSPORT", 00:17:22.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:22.054 "adrfam": "ipv4", 00:17:22.054 "trsvcid": "$NVMF_PORT", 00:17:22.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:22.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:22.054 "hdgst": ${hdgst:-false}, 00:17:22.054 "ddgst": ${ddgst:-false} 00:17:22.054 }, 00:17:22.054 "method": "bdev_nvme_attach_controller" 00:17:22.054 } 00:17:22.054 EOF 00:17:22.054 )") 00:17:22.054 11:55:15 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:22.054 11:55:15 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:22.054 11:55:15 -- nvmf/common.sh@520 -- # config=() 00:17:22.054 11:55:15 -- nvmf/common.sh@542 -- # cat 00:17:22.054 11:55:15 -- nvmf/common.sh@520 -- # local subsystem config 00:17:22.054 11:55:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:22.054 11:55:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:22.054 { 00:17:22.054 "params": { 00:17:22.054 "name": "Nvme$subsystem", 00:17:22.054 "trtype": "$TEST_TRANSPORT", 00:17:22.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:22.054 "adrfam": "ipv4", 00:17:22.054 "trsvcid": "$NVMF_PORT", 00:17:22.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:22.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:22.054 "hdgst": ${hdgst:-false}, 00:17:22.054 "ddgst": ${ddgst:-false} 00:17:22.054 }, 00:17:22.054 "method": "bdev_nvme_attach_controller" 00:17:22.054 } 00:17:22.054 EOF 00:17:22.054 )") 00:17:22.054 11:55:15 -- nvmf/common.sh@542 -- # cat 00:17:22.054 11:55:15 -- target/bdev_io_wait.sh@37 -- # wait 1920055 00:17:22.054 11:55:15 -- nvmf/common.sh@542 -- # cat 00:17:22.055 11:55:15 -- nvmf/common.sh@544 -- # jq . 00:17:22.055 11:55:15 -- nvmf/common.sh@544 -- # jq . 00:17:22.055 11:55:15 -- nvmf/common.sh@544 -- # jq . 00:17:22.055 11:55:15 -- nvmf/common.sh@545 -- # IFS=, 00:17:22.055 11:55:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:22.055 "params": { 00:17:22.055 "name": "Nvme1", 00:17:22.055 "trtype": "tcp", 00:17:22.055 "traddr": "10.0.0.2", 00:17:22.055 "adrfam": "ipv4", 00:17:22.055 "trsvcid": "4420", 00:17:22.055 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:22.055 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:22.055 "hdgst": false, 00:17:22.055 "ddgst": false 00:17:22.055 }, 00:17:22.055 "method": "bdev_nvme_attach_controller" 00:17:22.055 }' 00:17:22.055 11:55:15 -- nvmf/common.sh@544 -- # jq . 
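At this point the target side is ready: nvmf_tgt is up inside the namespace and answering on /var/tmp/spdk.sock, a 64 MB malloc bdev with 512-byte blocks (Malloc0) is exposed as a namespace of nqn.2016-06.io.spdk:cnode1, and a TCP listener is open on 10.0.0.2:4420. The four bdevperf processes above differ only in core mask, instance id and workload (write, read, flush, unmap); each one receives its bdev configuration from gen_nvmf_target_json, whose resolved parameters are the printf blocks below (Nvme1 over tcp to 10.0.0.2:4420, subsystem cnode1, digests off), delivered through a process substitution, which is why --json /dev/fd/63 appears in the command lines. Roughly, for the write instance, assuming the helpers from this test/nvmf common.sh are sourced and the command is run from the spdk checkout:

  # attach Nvme1 to the target over TCP before the write workload starts;
  # the <(...) pipe is what the trace shows as /dev/fd/63
  ./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
      --json <(gen_nvmf_target_json)

The four instances run concurrently against the same cnode1 subsystem; distinct core masks (0x10/0x20/0x40/0x80) and -i instance ids keep their hugepage files and reactors from colliding.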
00:17:22.316 11:55:15 -- nvmf/common.sh@545 -- # IFS=, 00:17:22.316 11:55:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:22.316 "params": { 00:17:22.316 "name": "Nvme1", 00:17:22.316 "trtype": "tcp", 00:17:22.316 "traddr": "10.0.0.2", 00:17:22.316 "adrfam": "ipv4", 00:17:22.316 "trsvcid": "4420", 00:17:22.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:22.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:22.316 "hdgst": false, 00:17:22.316 "ddgst": false 00:17:22.316 }, 00:17:22.316 "method": "bdev_nvme_attach_controller" 00:17:22.316 }' 00:17:22.316 11:55:15 -- nvmf/common.sh@545 -- # IFS=, 00:17:22.316 11:55:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:22.316 "params": { 00:17:22.316 "name": "Nvme1", 00:17:22.316 "trtype": "tcp", 00:17:22.316 "traddr": "10.0.0.2", 00:17:22.316 "adrfam": "ipv4", 00:17:22.316 "trsvcid": "4420", 00:17:22.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:22.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:22.316 "hdgst": false, 00:17:22.316 "ddgst": false 00:17:22.316 }, 00:17:22.316 "method": "bdev_nvme_attach_controller" 00:17:22.316 }' 00:17:22.316 11:55:15 -- nvmf/common.sh@545 -- # IFS=, 00:17:22.316 11:55:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:22.316 "params": { 00:17:22.316 "name": "Nvme1", 00:17:22.316 "trtype": "tcp", 00:17:22.316 "traddr": "10.0.0.2", 00:17:22.316 "adrfam": "ipv4", 00:17:22.316 "trsvcid": "4420", 00:17:22.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:22.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:22.316 "hdgst": false, 00:17:22.316 "ddgst": false 00:17:22.317 }, 00:17:22.317 "method": "bdev_nvme_attach_controller" 00:17:22.317 }' 00:17:22.317 [2024-06-10 11:55:15.853817] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:22.317 [2024-06-10 11:55:15.853870] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:22.317 [2024-06-10 11:55:15.853984] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:22.317 [2024-06-10 11:55:15.854029] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:22.317 [2024-06-10 11:55:15.854737] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:22.317 [2024-06-10 11:55:15.854779] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:22.317 [2024-06-10 11:55:15.857235] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:17:22.317 [2024-06-10 11:55:15.857296] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:22.317 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.317 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.317 [2024-06-10 11:55:16.001318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.317 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.317 [2024-06-10 11:55:16.050069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:22.317 [2024-06-10 11:55:16.058188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.317 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.577 [2024-06-10 11:55:16.106891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:22.577 [2024-06-10 11:55:16.119596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.577 [2024-06-10 11:55:16.167808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.577 [2024-06-10 11:55:16.168517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:22.577 [2024-06-10 11:55:16.215134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:22.577 Running I/O for 1 seconds... 00:17:22.577 Running I/O for 1 seconds... 00:17:22.838 Running I/O for 1 seconds... 00:17:22.838 Running I/O for 1 seconds... 00:17:23.782 00:17:23.782 Latency(us) 00:17:23.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.782 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:23.782 Nvme1n1 : 1.01 13851.56 54.11 0.00 0.00 9211.58 5734.40 18896.21 00:17:23.782 =================================================================================================================== 00:17:23.782 Total : 13851.56 54.11 0.00 0.00 9211.58 5734.40 18896.21 00:17:23.782 00:17:23.782 Latency(us) 00:17:23.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.782 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:23.782 Nvme1n1 : 1.01 11765.10 45.96 0.00 0.00 10845.08 5297.49 21626.88 00:17:23.782 =================================================================================================================== 00:17:23.782 Total : 11765.10 45.96 0.00 0.00 10845.08 5297.49 21626.88 00:17:23.782 00:17:23.782 Latency(us) 00:17:23.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.782 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:23.782 Nvme1n1 : 1.00 19241.92 75.16 0.00 0.00 6636.41 3741.01 18131.63 00:17:23.782 =================================================================================================================== 00:17:23.782 Total : 19241.92 75.16 0.00 0.00 6636.41 3741.01 18131.63 00:17:23.782 11:55:17 -- target/bdev_io_wait.sh@38 -- # wait 1920057 00:17:23.782 11:55:17 -- target/bdev_io_wait.sh@39 -- # wait 1920059 00:17:23.782 00:17:23.782 Latency(us) 00:17:23.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.782 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:23.782 Nvme1n1 : 1.00 190094.61 742.56 0.00 0.00 670.79 264.53 750.93 00:17:23.782 =================================================================================================================== 00:17:23.782 
Total : 190094.61 742.56 0.00 0.00 670.79 264.53 750.93 00:17:24.043 11:55:17 -- target/bdev_io_wait.sh@40 -- # wait 1920062 00:17:24.043 11:55:17 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:24.043 11:55:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:24.043 11:55:17 -- common/autotest_common.sh@10 -- # set +x 00:17:24.043 11:55:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:24.043 11:55:17 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:24.043 11:55:17 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:24.043 11:55:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:24.043 11:55:17 -- nvmf/common.sh@116 -- # sync 00:17:24.043 11:55:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:24.043 11:55:17 -- nvmf/common.sh@119 -- # set +e 00:17:24.043 11:55:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:24.043 11:55:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:24.043 rmmod nvme_tcp 00:17:24.043 rmmod nvme_fabrics 00:17:24.043 rmmod nvme_keyring 00:17:24.043 11:55:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:24.043 11:55:17 -- nvmf/common.sh@123 -- # set -e 00:17:24.043 11:55:17 -- nvmf/common.sh@124 -- # return 0 00:17:24.043 11:55:17 -- nvmf/common.sh@477 -- # '[' -n 1919727 ']' 00:17:24.043 11:55:17 -- nvmf/common.sh@478 -- # killprocess 1919727 00:17:24.043 11:55:17 -- common/autotest_common.sh@926 -- # '[' -z 1919727 ']' 00:17:24.043 11:55:17 -- common/autotest_common.sh@930 -- # kill -0 1919727 00:17:24.043 11:55:17 -- common/autotest_common.sh@931 -- # uname 00:17:24.043 11:55:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:24.043 11:55:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1919727 00:17:24.043 11:55:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:24.043 11:55:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:24.043 11:55:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1919727' 00:17:24.043 killing process with pid 1919727 00:17:24.043 11:55:17 -- common/autotest_common.sh@945 -- # kill 1919727 00:17:24.043 11:55:17 -- common/autotest_common.sh@950 -- # wait 1919727 00:17:24.304 11:55:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:24.304 11:55:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:24.304 11:55:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:24.304 11:55:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:24.304 11:55:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:24.304 11:55:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.304 11:55:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.304 11:55:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.219 11:55:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:26.219 00:17:26.219 real 0m12.483s 00:17:26.219 user 0m18.934s 00:17:26.219 sys 0m6.705s 00:17:26.219 11:55:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:26.219 11:55:19 -- common/autotest_common.sh@10 -- # set +x 00:17:26.219 ************************************ 00:17:26.219 END TEST nvmf_bdev_io_wait 00:17:26.219 ************************************ 00:17:26.480 11:55:20 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:26.480 11:55:20 -- common/autotest_common.sh@1077 
-- # '[' 3 -le 1 ']' 00:17:26.480 11:55:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:26.480 11:55:20 -- common/autotest_common.sh@10 -- # set +x 00:17:26.480 ************************************ 00:17:26.480 START TEST nvmf_queue_depth 00:17:26.480 ************************************ 00:17:26.480 11:55:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:26.480 * Looking for test storage... 00:17:26.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:26.480 11:55:20 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:26.480 11:55:20 -- nvmf/common.sh@7 -- # uname -s 00:17:26.480 11:55:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.480 11:55:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.480 11:55:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.480 11:55:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.480 11:55:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.480 11:55:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.480 11:55:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.480 11:55:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.480 11:55:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.480 11:55:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.480 11:55:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:26.480 11:55:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:26.480 11:55:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.480 11:55:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.480 11:55:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:26.480 11:55:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:26.480 11:55:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.480 11:55:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.480 11:55:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.481 11:55:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.481 11:55:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.481 11:55:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.481 11:55:20 -- paths/export.sh@5 -- # export PATH 00:17:26.481 11:55:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.481 11:55:20 -- nvmf/common.sh@46 -- # : 0 00:17:26.481 11:55:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:26.481 11:55:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:26.481 11:55:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:26.481 11:55:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.481 11:55:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.481 11:55:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:26.481 11:55:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:26.481 11:55:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:26.481 11:55:20 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:26.481 11:55:20 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:26.481 11:55:20 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:26.481 11:55:20 -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:26.481 11:55:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:26.481 11:55:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:26.481 11:55:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:26.481 11:55:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:26.481 11:55:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:26.481 11:55:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.481 11:55:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.481 11:55:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.481 11:55:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:26.481 11:55:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:26.481 11:55:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:26.481 11:55:20 -- common/autotest_common.sh@10 -- # set +x 00:17:34.652 11:55:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:34.652 11:55:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:34.652 11:55:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:34.652 11:55:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:34.652 11:55:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:34.652 11:55:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:34.652 11:55:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:34.652 11:55:27 -- nvmf/common.sh@294 -- # net_devs=() 
00:17:34.652 11:55:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:34.652 11:55:27 -- nvmf/common.sh@295 -- # e810=() 00:17:34.652 11:55:27 -- nvmf/common.sh@295 -- # local -ga e810 00:17:34.652 11:55:27 -- nvmf/common.sh@296 -- # x722=() 00:17:34.652 11:55:27 -- nvmf/common.sh@296 -- # local -ga x722 00:17:34.652 11:55:27 -- nvmf/common.sh@297 -- # mlx=() 00:17:34.652 11:55:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:34.652 11:55:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:34.652 11:55:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:34.652 11:55:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:34.652 11:55:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:34.652 11:55:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:34.652 11:55:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:34.652 11:55:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:34.652 11:55:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:34.652 11:55:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:34.652 11:55:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:34.652 11:55:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:34.652 11:55:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:34.652 11:55:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:34.652 11:55:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:34.652 11:55:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:34.652 11:55:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:34.652 11:55:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:34.652 11:55:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:34.652 11:55:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:34.652 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:34.652 11:55:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:34.652 11:55:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:34.652 11:55:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.652 11:55:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.652 11:55:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:34.652 11:55:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:34.652 11:55:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:34.652 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:34.652 11:55:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:34.652 11:55:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:34.652 11:55:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.652 11:55:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.652 11:55:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:34.652 11:55:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:34.652 11:55:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:34.652 11:55:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:34.652 11:55:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:34.652 11:55:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.652 11:55:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:34.652 11:55:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:17:34.652 11:55:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:34.652 Found net devices under 0000:31:00.0: cvl_0_0 00:17:34.652 11:55:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.652 11:55:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:34.652 11:55:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.652 11:55:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:34.652 11:55:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.652 11:55:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:34.652 Found net devices under 0000:31:00.1: cvl_0_1 00:17:34.652 11:55:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.652 11:55:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:34.652 11:55:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:34.652 11:55:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:34.652 11:55:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:34.652 11:55:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:34.652 11:55:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.652 11:55:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:34.652 11:55:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:34.652 11:55:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:34.652 11:55:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:34.652 11:55:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:34.652 11:55:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:34.652 11:55:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:34.652 11:55:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.652 11:55:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:34.652 11:55:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:34.652 11:55:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:34.652 11:55:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:34.652 11:55:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:34.652 11:55:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:34.652 11:55:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:34.652 11:55:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:34.652 11:55:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:34.652 11:55:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:34.652 11:55:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:34.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:34.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.485 ms 00:17:34.652 00:17:34.652 --- 10.0.0.2 ping statistics --- 00:17:34.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.652 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:17:34.652 11:55:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:34.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:34.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.360 ms 00:17:34.652 00:17:34.652 --- 10.0.0.1 ping statistics --- 00:17:34.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.652 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:17:34.652 11:55:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:34.652 11:55:27 -- nvmf/common.sh@410 -- # return 0 00:17:34.652 11:55:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:34.652 11:55:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:34.652 11:55:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:34.652 11:55:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:34.652 11:55:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:34.652 11:55:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:34.652 11:55:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:34.652 11:55:27 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:34.652 11:55:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:34.652 11:55:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:34.652 11:55:27 -- common/autotest_common.sh@10 -- # set +x 00:17:34.652 11:55:27 -- nvmf/common.sh@469 -- # nvmfpid=1924707 00:17:34.652 11:55:27 -- nvmf/common.sh@470 -- # waitforlisten 1924707 00:17:34.652 11:55:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:34.652 11:55:27 -- common/autotest_common.sh@819 -- # '[' -z 1924707 ']' 00:17:34.652 11:55:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.652 11:55:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:34.652 11:55:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.652 11:55:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:34.652 11:55:27 -- common/autotest_common.sh@10 -- # set +x 00:17:34.652 [2024-06-10 11:55:27.468648] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:34.652 [2024-06-10 11:55:27.468709] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.652 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.652 [2024-06-10 11:55:27.555262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.652 [2024-06-10 11:55:27.646134] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:34.652 [2024-06-10 11:55:27.646293] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.652 [2024-06-10 11:55:27.646302] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:34.652 [2024-06-10 11:55:27.646310] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:34.652 [2024-06-10 11:55:27.646340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.652 11:55:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:34.652 11:55:28 -- common/autotest_common.sh@852 -- # return 0 00:17:34.652 11:55:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:34.652 11:55:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:34.652 11:55:28 -- common/autotest_common.sh@10 -- # set +x 00:17:34.653 11:55:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.653 11:55:28 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:34.653 11:55:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:34.653 11:55:28 -- common/autotest_common.sh@10 -- # set +x 00:17:34.653 [2024-06-10 11:55:28.293789] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:34.653 11:55:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:34.653 11:55:28 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:34.653 11:55:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:34.653 11:55:28 -- common/autotest_common.sh@10 -- # set +x 00:17:34.653 Malloc0 00:17:34.653 11:55:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:34.653 11:55:28 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:34.653 11:55:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:34.653 11:55:28 -- common/autotest_common.sh@10 -- # set +x 00:17:34.653 11:55:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:34.653 11:55:28 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:34.653 11:55:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:34.653 11:55:28 -- common/autotest_common.sh@10 -- # set +x 00:17:34.653 11:55:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:34.653 11:55:28 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:34.653 11:55:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:34.653 11:55:28 -- common/autotest_common.sh@10 -- # set +x 00:17:34.653 [2024-06-10 11:55:28.368249] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.653 11:55:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:34.653 11:55:28 -- target/queue_depth.sh@30 -- # bdevperf_pid=1924848 00:17:34.653 11:55:28 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:34.653 11:55:28 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:34.653 11:55:28 -- target/queue_depth.sh@33 -- # waitforlisten 1924848 /var/tmp/bdevperf.sock 00:17:34.653 11:55:28 -- common/autotest_common.sh@819 -- # '[' -z 1924848 ']' 00:17:34.653 11:55:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:34.653 11:55:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:34.653 11:55:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:34.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:34.653 11:55:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:34.653 11:55:28 -- common/autotest_common.sh@10 -- # set +x 00:17:34.653 [2024-06-10 11:55:28.418624] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:34.653 [2024-06-10 11:55:28.418689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1924848 ] 00:17:34.913 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.913 [2024-06-10 11:55:28.482814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.913 [2024-06-10 11:55:28.554276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.487 11:55:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:35.487 11:55:29 -- common/autotest_common.sh@852 -- # return 0 00:17:35.487 11:55:29 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:35.487 11:55:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:35.487 11:55:29 -- common/autotest_common.sh@10 -- # set +x 00:17:35.747 NVMe0n1 00:17:35.747 11:55:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:35.747 11:55:29 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:35.747 Running I/O for 10 seconds... 00:17:45.751 00:17:45.751 Latency(us) 00:17:45.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.751 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:45.751 Verification LBA range: start 0x0 length 0x4000 00:17:45.751 NVMe0n1 : 10.04 19427.22 75.89 0.00 0.00 52559.12 9939.63 53084.16 00:17:45.751 =================================================================================================================== 00:17:45.751 Total : 19427.22 75.89 0.00 0.00 52559.12 9939.63 53084.16 00:17:45.751 0 00:17:45.751 11:55:39 -- target/queue_depth.sh@39 -- # killprocess 1924848 00:17:45.751 11:55:39 -- common/autotest_common.sh@926 -- # '[' -z 1924848 ']' 00:17:45.751 11:55:39 -- common/autotest_common.sh@930 -- # kill -0 1924848 00:17:45.751 11:55:39 -- common/autotest_common.sh@931 -- # uname 00:17:45.751 11:55:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:45.751 11:55:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1924848 00:17:45.751 11:55:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:45.751 11:55:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:45.751 11:55:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1924848' 00:17:45.751 killing process with pid 1924848 00:17:45.751 11:55:39 -- common/autotest_common.sh@945 -- # kill 1924848 00:17:45.751 Received shutdown signal, test time was about 10.000000 seconds 00:17:45.751 00:17:45.751 Latency(us) 00:17:45.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.751 =================================================================================================================== 00:17:45.751 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:45.751 11:55:39 -- 
common/autotest_common.sh@950 -- # wait 1924848 00:17:46.011 11:55:39 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:46.011 11:55:39 -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:46.011 11:55:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:46.011 11:55:39 -- nvmf/common.sh@116 -- # sync 00:17:46.011 11:55:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:46.011 11:55:39 -- nvmf/common.sh@119 -- # set +e 00:17:46.011 11:55:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:46.011 11:55:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:46.011 rmmod nvme_tcp 00:17:46.011 rmmod nvme_fabrics 00:17:46.011 rmmod nvme_keyring 00:17:46.011 11:55:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:46.011 11:55:39 -- nvmf/common.sh@123 -- # set -e 00:17:46.011 11:55:39 -- nvmf/common.sh@124 -- # return 0 00:17:46.011 11:55:39 -- nvmf/common.sh@477 -- # '[' -n 1924707 ']' 00:17:46.011 11:55:39 -- nvmf/common.sh@478 -- # killprocess 1924707 00:17:46.011 11:55:39 -- common/autotest_common.sh@926 -- # '[' -z 1924707 ']' 00:17:46.011 11:55:39 -- common/autotest_common.sh@930 -- # kill -0 1924707 00:17:46.011 11:55:39 -- common/autotest_common.sh@931 -- # uname 00:17:46.011 11:55:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:46.011 11:55:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1924707 00:17:46.011 11:55:39 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:46.011 11:55:39 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:46.011 11:55:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1924707' 00:17:46.011 killing process with pid 1924707 00:17:46.011 11:55:39 -- common/autotest_common.sh@945 -- # kill 1924707 00:17:46.011 11:55:39 -- common/autotest_common.sh@950 -- # wait 1924707 00:17:46.272 11:55:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:46.272 11:55:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:46.272 11:55:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:46.272 11:55:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:46.272 11:55:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:46.272 11:55:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.272 11:55:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:46.272 11:55:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.202 11:55:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:48.202 00:17:48.202 real 0m21.912s 00:17:48.202 user 0m25.421s 00:17:48.202 sys 0m6.468s 00:17:48.202 11:55:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:48.202 11:55:41 -- common/autotest_common.sh@10 -- # set +x 00:17:48.202 ************************************ 00:17:48.202 END TEST nvmf_queue_depth 00:17:48.202 ************************************ 00:17:48.202 11:55:41 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:48.202 11:55:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:48.202 11:55:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:48.202 11:55:41 -- common/autotest_common.sh@10 -- # set +x 00:17:48.463 ************************************ 00:17:48.463 START TEST nvmf_multipath 00:17:48.463 ************************************ 00:17:48.463 11:55:41 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:48.463 * Looking for test storage... 00:17:48.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:48.463 11:55:42 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:48.463 11:55:42 -- nvmf/common.sh@7 -- # uname -s 00:17:48.463 11:55:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:48.463 11:55:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:48.463 11:55:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:48.463 11:55:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:48.463 11:55:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:48.463 11:55:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:48.463 11:55:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:48.463 11:55:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:48.463 11:55:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:48.463 11:55:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:48.463 11:55:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:48.463 11:55:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:48.463 11:55:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:48.463 11:55:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:48.463 11:55:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:48.463 11:55:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:48.463 11:55:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:48.463 11:55:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:48.463 11:55:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:48.463 11:55:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.463 11:55:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.463 11:55:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.463 11:55:42 -- paths/export.sh@5 -- # export PATH 00:17:48.463 11:55:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.463 11:55:42 -- nvmf/common.sh@46 -- # : 0 00:17:48.463 11:55:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:48.463 11:55:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:48.463 11:55:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:48.463 11:55:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:48.463 11:55:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:48.463 11:55:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:48.463 11:55:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:48.463 11:55:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:48.463 11:55:42 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:48.463 11:55:42 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:48.463 11:55:42 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:48.463 11:55:42 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:48.463 11:55:42 -- target/multipath.sh@43 -- # nvmftestinit 00:17:48.463 11:55:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:48.463 11:55:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:48.463 11:55:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:48.463 11:55:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:48.463 11:55:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:48.463 11:55:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.463 11:55:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:48.463 11:55:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.463 11:55:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:48.463 11:55:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:48.463 11:55:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:48.463 11:55:42 -- common/autotest_common.sh@10 -- # set +x 00:17:56.669 11:55:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:56.669 11:55:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:56.669 11:55:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:56.669 11:55:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:56.669 11:55:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:56.669 11:55:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:56.669 11:55:49 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:17:56.669 11:55:49 -- nvmf/common.sh@294 -- # net_devs=() 00:17:56.669 11:55:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:56.669 11:55:49 -- nvmf/common.sh@295 -- # e810=() 00:17:56.669 11:55:49 -- nvmf/common.sh@295 -- # local -ga e810 00:17:56.669 11:55:49 -- nvmf/common.sh@296 -- # x722=() 00:17:56.669 11:55:49 -- nvmf/common.sh@296 -- # local -ga x722 00:17:56.669 11:55:49 -- nvmf/common.sh@297 -- # mlx=() 00:17:56.669 11:55:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:56.669 11:55:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:56.669 11:55:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:56.669 11:55:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:56.669 11:55:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:56.669 11:55:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:56.669 11:55:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:56.669 11:55:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:56.670 11:55:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:56.670 11:55:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:56.670 11:55:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:56.670 11:55:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:56.670 11:55:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:56.670 11:55:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:56.670 11:55:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:56.670 11:55:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:56.670 11:55:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:56.670 11:55:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:56.670 11:55:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:56.670 11:55:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:56.670 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:56.670 11:55:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:56.670 11:55:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:56.670 11:55:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.670 11:55:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.670 11:55:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:56.670 11:55:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:56.670 11:55:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:56.670 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:56.670 11:55:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:56.670 11:55:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:56.670 11:55:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.670 11:55:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.670 11:55:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:56.670 11:55:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:56.670 11:55:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:56.670 11:55:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:56.670 11:55:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:56.670 11:55:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.670 11:55:49 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:17:56.670 11:55:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.670 11:55:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:56.670 Found net devices under 0000:31:00.0: cvl_0_0 00:17:56.670 11:55:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.670 11:55:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:56.670 11:55:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.670 11:55:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:56.670 11:55:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.670 11:55:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:56.670 Found net devices under 0000:31:00.1: cvl_0_1 00:17:56.670 11:55:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.670 11:55:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:56.670 11:55:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:56.670 11:55:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:56.670 11:55:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:56.670 11:55:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:56.670 11:55:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:56.670 11:55:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:56.670 11:55:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:56.670 11:55:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:56.670 11:55:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:56.670 11:55:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:56.670 11:55:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:56.670 11:55:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:56.670 11:55:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:56.670 11:55:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:56.670 11:55:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:56.670 11:55:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:56.670 11:55:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:56.670 11:55:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:56.670 11:55:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:56.670 11:55:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:56.670 11:55:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:56.670 11:55:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:56.670 11:55:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:56.670 11:55:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:56.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:56.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:17:56.670 00:17:56.670 --- 10.0.0.2 ping statistics --- 00:17:56.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.670 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:17:56.670 11:55:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:56.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:56.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.352 ms 00:17:56.670 00:17:56.670 --- 10.0.0.1 ping statistics --- 00:17:56.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.670 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:17:56.670 11:55:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:56.670 11:55:49 -- nvmf/common.sh@410 -- # return 0 00:17:56.670 11:55:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:56.670 11:55:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:56.670 11:55:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:56.670 11:55:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:56.670 11:55:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:56.670 11:55:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:56.670 11:55:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:56.670 11:55:49 -- target/multipath.sh@45 -- # '[' -z ']' 00:17:56.670 11:55:49 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:56.670 only one NIC for nvmf test 00:17:56.670 11:55:49 -- target/multipath.sh@47 -- # nvmftestfini 00:17:56.670 11:55:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:56.670 11:55:49 -- nvmf/common.sh@116 -- # sync 00:17:56.670 11:55:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:56.670 11:55:49 -- nvmf/common.sh@119 -- # set +e 00:17:56.670 11:55:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:56.670 11:55:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:56.670 rmmod nvme_tcp 00:17:56.670 rmmod nvme_fabrics 00:17:56.670 rmmod nvme_keyring 00:17:56.670 11:55:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:56.670 11:55:49 -- nvmf/common.sh@123 -- # set -e 00:17:56.670 11:55:49 -- nvmf/common.sh@124 -- # return 0 00:17:56.670 11:55:49 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:17:56.670 11:55:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:56.670 11:55:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:56.670 11:55:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:56.670 11:55:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:56.670 11:55:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:56.670 11:55:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.670 11:55:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:56.670 11:55:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.055 11:55:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:58.056 11:55:51 -- target/multipath.sh@48 -- # exit 0 00:17:58.056 11:55:51 -- target/multipath.sh@1 -- # nvmftestfini 00:17:58.056 11:55:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:58.056 11:55:51 -- nvmf/common.sh@116 -- # sync 00:17:58.056 11:55:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:58.056 11:55:51 -- nvmf/common.sh@119 -- # set +e 00:17:58.056 11:55:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:58.056 11:55:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:58.056 11:55:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:58.056 11:55:51 -- nvmf/common.sh@123 -- # set -e 00:17:58.056 11:55:51 -- nvmf/common.sh@124 -- # return 0 00:17:58.056 11:55:51 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:17:58.056 11:55:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:58.056 11:55:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:58.056 11:55:51 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:17:58.056 11:55:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:58.056 11:55:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:58.056 11:55:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.056 11:55:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:58.056 11:55:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.056 11:55:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:58.056 00:17:58.056 real 0m9.607s 00:17:58.056 user 0m2.086s 00:17:58.056 sys 0m5.419s 00:17:58.056 11:55:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:58.056 11:55:51 -- common/autotest_common.sh@10 -- # set +x 00:17:58.056 ************************************ 00:17:58.056 END TEST nvmf_multipath 00:17:58.056 ************************************ 00:17:58.056 11:55:51 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:58.056 11:55:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:58.056 11:55:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:58.056 11:55:51 -- common/autotest_common.sh@10 -- # set +x 00:17:58.056 ************************************ 00:17:58.056 START TEST nvmf_zcopy 00:17:58.056 ************************************ 00:17:58.056 11:55:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:58.056 * Looking for test storage... 00:17:58.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:58.056 11:55:51 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:58.056 11:55:51 -- nvmf/common.sh@7 -- # uname -s 00:17:58.056 11:55:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.056 11:55:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.056 11:55:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.056 11:55:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.056 11:55:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.056 11:55:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.056 11:55:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.056 11:55:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.056 11:55:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.056 11:55:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.056 11:55:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:58.056 11:55:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:58.056 11:55:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.056 11:55:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.056 11:55:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:58.056 11:55:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:58.056 11:55:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.056 11:55:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.056 11:55:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.056 11:55:51 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.056 11:55:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.056 11:55:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.056 11:55:51 -- paths/export.sh@5 -- # export PATH 00:17:58.056 11:55:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.056 11:55:51 -- nvmf/common.sh@46 -- # : 0 00:17:58.056 11:55:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:58.056 11:55:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:58.056 11:55:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:58.056 11:55:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.056 11:55:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.056 11:55:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:58.056 11:55:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:58.056 11:55:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:58.056 11:55:51 -- target/zcopy.sh@12 -- # nvmftestinit 00:17:58.056 11:55:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:58.056 11:55:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:58.056 11:55:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:58.056 11:55:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:58.056 11:55:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:58.057 11:55:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.057 11:55:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:58.057 11:55:51 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.057 11:55:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:58.057 11:55:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:58.057 11:55:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:58.057 11:55:51 -- common/autotest_common.sh@10 -- # set +x 00:18:06.201 11:55:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:06.201 11:55:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:06.201 11:55:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:06.201 11:55:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:06.201 11:55:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:06.201 11:55:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:06.201 11:55:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:06.201 11:55:58 -- nvmf/common.sh@294 -- # net_devs=() 00:18:06.201 11:55:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:06.201 11:55:58 -- nvmf/common.sh@295 -- # e810=() 00:18:06.201 11:55:58 -- nvmf/common.sh@295 -- # local -ga e810 00:18:06.201 11:55:58 -- nvmf/common.sh@296 -- # x722=() 00:18:06.201 11:55:58 -- nvmf/common.sh@296 -- # local -ga x722 00:18:06.201 11:55:58 -- nvmf/common.sh@297 -- # mlx=() 00:18:06.201 11:55:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:06.201 11:55:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:06.201 11:55:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:06.201 11:55:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:06.201 11:55:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:06.201 11:55:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:06.201 11:55:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:06.201 11:55:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:06.202 11:55:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:06.202 11:55:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:06.202 11:55:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:06.202 11:55:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:06.202 11:55:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:06.202 11:55:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:06.202 11:55:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:06.202 11:55:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:06.202 11:55:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:06.202 11:55:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:06.202 11:55:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:06.202 11:55:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:06.202 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:06.202 11:55:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:06.202 11:55:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:06.202 11:55:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:06.202 11:55:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:06.202 11:55:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:06.202 11:55:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:06.202 11:55:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:06.202 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:06.202 
11:55:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:06.202 11:55:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:06.202 11:55:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:06.202 11:55:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:06.202 11:55:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:06.202 11:55:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:06.202 11:55:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:06.202 11:55:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:06.202 11:55:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:06.202 11:55:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:06.202 11:55:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:06.202 11:55:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:06.202 11:55:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:06.202 Found net devices under 0000:31:00.0: cvl_0_0 00:18:06.202 11:55:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:06.202 11:55:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:06.202 11:55:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:06.202 11:55:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:06.202 11:55:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:06.202 11:55:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:06.202 Found net devices under 0000:31:00.1: cvl_0_1 00:18:06.202 11:55:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:06.202 11:55:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:06.202 11:55:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:06.202 11:55:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:06.202 11:55:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:06.202 11:55:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:06.202 11:55:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:06.202 11:55:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:06.202 11:55:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:06.202 11:55:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:06.202 11:55:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:06.202 11:55:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:06.202 11:55:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:06.202 11:55:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:06.202 11:55:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:06.202 11:55:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:06.202 11:55:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:06.202 11:55:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:06.202 11:55:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:06.202 11:55:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:06.202 11:55:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:06.202 11:55:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:06.202 11:55:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:06.202 11:55:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:06.202 11:55:58 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:06.202 11:55:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:06.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:06.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:18:06.202 00:18:06.202 --- 10.0.0.2 ping statistics --- 00:18:06.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.202 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:18:06.202 11:55:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:06.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:06.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:18:06.202 00:18:06.202 --- 10.0.0.1 ping statistics --- 00:18:06.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.202 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:18:06.202 11:55:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:06.202 11:55:58 -- nvmf/common.sh@410 -- # return 0 00:18:06.202 11:55:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:06.202 11:55:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:06.202 11:55:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:06.202 11:55:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:06.202 11:55:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:06.202 11:55:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:06.202 11:55:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:06.202 11:55:58 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:06.202 11:55:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:06.202 11:55:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:06.202 11:55:58 -- common/autotest_common.sh@10 -- # set +x 00:18:06.202 11:55:58 -- nvmf/common.sh@469 -- # nvmfpid=1935680 00:18:06.202 11:55:58 -- nvmf/common.sh@470 -- # waitforlisten 1935680 00:18:06.202 11:55:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:06.202 11:55:58 -- common/autotest_common.sh@819 -- # '[' -z 1935680 ']' 00:18:06.202 11:55:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.202 11:55:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:06.202 11:55:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.202 11:55:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:06.202 11:55:58 -- common/autotest_common.sh@10 -- # set +x 00:18:06.202 [2024-06-10 11:55:59.010511] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:18:06.202 [2024-06-10 11:55:59.010573] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.202 EAL: No free 2048 kB hugepages reported on node 1 00:18:06.202 [2024-06-10 11:55:59.097473] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.202 [2024-06-10 11:55:59.187497] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:06.202 [2024-06-10 11:55:59.187645] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:06.202 [2024-06-10 11:55:59.187655] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.202 [2024-06-10 11:55:59.187662] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:06.202 [2024-06-10 11:55:59.187686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.202 11:55:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:06.202 11:55:59 -- common/autotest_common.sh@852 -- # return 0 00:18:06.202 11:55:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:06.202 11:55:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:06.202 11:55:59 -- common/autotest_common.sh@10 -- # set +x 00:18:06.202 11:55:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.202 11:55:59 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:06.202 11:55:59 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:06.202 11:55:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:06.202 11:55:59 -- common/autotest_common.sh@10 -- # set +x 00:18:06.202 [2024-06-10 11:55:59.838157] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.202 11:55:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:06.202 11:55:59 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:06.202 11:55:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:06.202 11:55:59 -- common/autotest_common.sh@10 -- # set +x 00:18:06.202 11:55:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:06.202 11:55:59 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:06.202 11:55:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:06.203 11:55:59 -- common/autotest_common.sh@10 -- # set +x 00:18:06.203 [2024-06-10 11:55:59.854340] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.203 11:55:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:06.203 11:55:59 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:06.203 11:55:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:06.203 11:55:59 -- common/autotest_common.sh@10 -- # set +x 00:18:06.203 11:55:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:06.203 11:55:59 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:06.203 11:55:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:06.203 11:55:59 -- common/autotest_common.sh@10 -- # set +x 00:18:06.203 malloc0 00:18:06.203 11:55:59 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:18:06.203 11:55:59 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:06.203 11:55:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:06.203 11:55:59 -- common/autotest_common.sh@10 -- # set +x 00:18:06.203 11:55:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:06.203 11:55:59 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:06.203 11:55:59 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:06.203 11:55:59 -- nvmf/common.sh@520 -- # config=() 00:18:06.203 11:55:59 -- nvmf/common.sh@520 -- # local subsystem config 00:18:06.203 11:55:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:06.203 11:55:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:06.203 { 00:18:06.203 "params": { 00:18:06.203 "name": "Nvme$subsystem", 00:18:06.203 "trtype": "$TEST_TRANSPORT", 00:18:06.203 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:06.203 "adrfam": "ipv4", 00:18:06.203 "trsvcid": "$NVMF_PORT", 00:18:06.203 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:06.203 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:06.203 "hdgst": ${hdgst:-false}, 00:18:06.203 "ddgst": ${ddgst:-false} 00:18:06.203 }, 00:18:06.203 "method": "bdev_nvme_attach_controller" 00:18:06.203 } 00:18:06.203 EOF 00:18:06.203 )") 00:18:06.203 11:55:59 -- nvmf/common.sh@542 -- # cat 00:18:06.203 11:55:59 -- nvmf/common.sh@544 -- # jq . 00:18:06.203 11:55:59 -- nvmf/common.sh@545 -- # IFS=, 00:18:06.203 11:55:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:06.203 "params": { 00:18:06.203 "name": "Nvme1", 00:18:06.203 "trtype": "tcp", 00:18:06.203 "traddr": "10.0.0.2", 00:18:06.203 "adrfam": "ipv4", 00:18:06.203 "trsvcid": "4420", 00:18:06.203 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.203 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:06.203 "hdgst": false, 00:18:06.203 "ddgst": false 00:18:06.203 }, 00:18:06.203 "method": "bdev_nvme_attach_controller" 00:18:06.203 }' 00:18:06.203 [2024-06-10 11:55:59.937719] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:06.203 [2024-06-10 11:55:59.937792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1935711 ] 00:18:06.203 EAL: No free 2048 kB hugepages reported on node 1 00:18:06.464 [2024-06-10 11:56:00.005681] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.464 [2024-06-10 11:56:00.084079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.464 Running I/O for 10 seconds... 
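Note on the bdevperf invocation above: the harness never writes a config file; gen_nvmf_target_json prints the bdev_nvme_attach_controller fragment shown by the printf trace, and process substitution hands it to bdevperf, which is why the command line reads --json /dev/fd/62. Below is a minimal sketch of an equivalent manual run, assuming the standard SPDK JSON config wrapper ("subsystems" -> "bdev" -> "config") around the params block that appears verbatim in the log, and assuming the target created by the preceding rpc_cmd calls is still listening on 10.0.0.2:4420; the file name /tmp/nvme1.json is only illustrative.
# Hypothetical standalone reproduction of the step above (a sketch, not taken
# from the harness scripts): write the attach-controller config to a file and
# point bdevperf at it instead of /dev/fd/62.
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same workload parameters as the logged run: 10 s, queue depth 128,
# verify workload, 8 KiB I/O size.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/nvme1.json -t 10 -q 128 -w verify -o 8192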
00:18:18.698 00:18:18.698 Latency(us) 00:18:18.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.698 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:18.698 Verification LBA range: start 0x0 length 0x1000 00:18:18.698 Nvme1n1 : 10.01 12686.27 99.11 0.00 0.00 10061.47 1392.64 18131.63 00:18:18.698 =================================================================================================================== 00:18:18.698 Total : 12686.27 99.11 0.00 0.00 10061.47 1392.64 18131.63 00:18:18.698 11:56:10 -- target/zcopy.sh@39 -- # perfpid=1937780 00:18:18.698 11:56:10 -- target/zcopy.sh@41 -- # xtrace_disable 00:18:18.698 11:56:10 -- common/autotest_common.sh@10 -- # set +x 00:18:18.698 11:56:10 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:18.698 11:56:10 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:18.698 11:56:10 -- nvmf/common.sh@520 -- # config=() 00:18:18.698 11:56:10 -- nvmf/common.sh@520 -- # local subsystem config 00:18:18.698 11:56:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:18.698 11:56:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:18.698 { 00:18:18.698 "params": { 00:18:18.698 "name": "Nvme$subsystem", 00:18:18.698 "trtype": "$TEST_TRANSPORT", 00:18:18.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:18.698 "adrfam": "ipv4", 00:18:18.698 "trsvcid": "$NVMF_PORT", 00:18:18.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:18.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:18.698 "hdgst": ${hdgst:-false}, 00:18:18.698 "ddgst": ${ddgst:-false} 00:18:18.698 }, 00:18:18.698 "method": "bdev_nvme_attach_controller" 00:18:18.698 } 00:18:18.698 EOF 00:18:18.698 )") 00:18:18.698 11:56:10 -- nvmf/common.sh@542 -- # cat 00:18:18.698 [2024-06-10 11:56:10.396925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.396952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 11:56:10 -- nvmf/common.sh@544 -- # jq . 
00:18:18.698 [2024-06-10 11:56:10.404919] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.404927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 11:56:10 -- nvmf/common.sh@545 -- # IFS=, 00:18:18.698 11:56:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:18.698 "params": { 00:18:18.698 "name": "Nvme1", 00:18:18.698 "trtype": "tcp", 00:18:18.698 "traddr": "10.0.0.2", 00:18:18.698 "adrfam": "ipv4", 00:18:18.698 "trsvcid": "4420", 00:18:18.698 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.698 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:18.698 "hdgst": false, 00:18:18.698 "ddgst": false 00:18:18.698 }, 00:18:18.698 "method": "bdev_nvme_attach_controller" 00:18:18.698 }' 00:18:18.698 [2024-06-10 11:56:10.412937] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.412944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.420958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.420965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.428979] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.428986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.433042] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:18.698 [2024-06-10 11:56:10.433098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1937780 ] 00:18:18.698 [2024-06-10 11:56:10.437000] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.437008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.445021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.445028] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.453041] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.453049] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.698 [2024-06-10 11:56:10.461062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.461070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.469083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.469091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.477105] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.477116] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.485127] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.485135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.492786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.698 [2024-06-10 11:56:10.493146] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.493152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.501167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.501175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.509187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.509195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.517208] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.517217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.525229] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.525240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.533253] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.533262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.541275] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.541282] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.549292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.549300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.554898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.698 [2024-06-10 11:56:10.557313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.557320] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.565334] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.565342] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.573359] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.573371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.581374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.581383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.589395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.589403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.597413] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.597421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.605434] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.605442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.613455] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.613462] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.621476] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.621487] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.629507] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.629520] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.637520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.637529] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.645541] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.645549] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.653562] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.653572] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.661584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.661592] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.669605] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.669612] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.677625] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.677632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.685646] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.685653] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.693669] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.693676] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.701689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.701696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.709712] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.709721] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.717733] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.717739] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.725755] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.725762] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.733775] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.733782] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.741797] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.741803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.749819] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.749825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.757841] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.757849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.765862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.765869] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.773884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.773895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.781904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.781911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.789926] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.789933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.797947] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.797955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.805968] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.805975] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.813999] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.814013] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 Running I/O for 5 seconds... 
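Note on the repeated error pairs surrounding "Running I/O for 5 seconds...": while bdevperf (perfpid 1937780) drives the randrw workload, something keeps re-issuing the add-namespace RPC for NSID 1, which already exists, so every attempt logs "Requested NSID 1 already in use" followed by "Unable to add namespace" and fails harmlessly; the apparent point is that each failed add pauses and resumes the subsystem under load. The loop below is only an assumed shape of that churn, not copied from zcopy.sh; the rpc call itself is the one traced earlier at target/zcopy.sh@30, and the loop condition and the trailing || true are illustrative.
# Assumed sketch of the namespace-churn loop behind the errors above.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
while kill -0 "$perfpid" 2> /dev/null; do
    # NSID 1 was created before bdevperf started, so this add is expected to
    # fail with "Requested NSID 1 already in use"; the useful side effect is
    # the subsystem pause/resume that wraps the attempt.
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done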
00:18:18.698 [2024-06-10 11:56:10.822012] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.822019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.833931] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.833946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.841425] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.841439] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.849711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.698 [2024-06-10 11:56:10.849726] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.698 [2024-06-10 11:56:10.858081] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:10.858096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:10.866503] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:10.866518] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:10.875638] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:10.875652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:10.883768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:10.883783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:10.892501] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:10.892516] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:10.901280] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:10.901294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:10.909908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:10.909923] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:10.918634] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:10.918649] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:10.927673] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:10.927689] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:10.936278] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:10.936296] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:10.945112] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 
[2024-06-10 11:56:10.945127] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:10.953757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:10.953771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:10.962270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:10.962285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:10.971381] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:10.971395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:10.980067] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:10.980081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:10.988461] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:10.988476] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:10.997128] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:10.997143] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.005995] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.006009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.013429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.013443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.023548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.023562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.031003] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.031017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.039761] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.039775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.048502] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.048516] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.056656] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.056670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.064943] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.064957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.073292] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.073307] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.081534] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.081548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.090054] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.090069] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.098851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.098868] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.107610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.107624] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.116095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.116109] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.124498] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.124512] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.132946] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.132960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.141457] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.141471] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.150029] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.150044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.158578] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.158592] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.167110] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.167124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.175646] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.175660] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.184182] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.184196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.192575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.192589] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.200941] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.200955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.209632] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.209647] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.218868] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.218882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.227065] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.227079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.236300] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.236314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.244544] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.244558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.252829] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.252843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.261832] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.261847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.269832] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.269847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.278331] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.278346] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.287287] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.287301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.295818] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.295832] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.304529] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.304544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.313130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.313144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.321818] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.321833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.330603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.330617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.339000] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.339014] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.347199] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.347213] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.355857] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.355871] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.364851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.364865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.373000] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.373014] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.381809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.381823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.390888] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.390902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.398352] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.398366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.407332] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.407346] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.415975] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.415989] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.424555] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.424569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.433152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.433166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.442101] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.442115] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.450730] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.450744] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.459166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.459180] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.467815] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.467829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.476528] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.476542] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.485321] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.485335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.493716] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.493730] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.502574] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.502588] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.511115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.511129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.519608] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.519622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.528209] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.528223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.537047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.537061] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.545395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.545409] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.553612] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.553627] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.562673] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.562687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.570875] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.570889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.579585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.579600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.587696] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.587710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.596070] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.596084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.604578] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.604593] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.612989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.613003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.621536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.621550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.630052] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.630066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.638844] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.638858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.647170] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.647184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.655875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.655889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.664661] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.664675] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.673010] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.673024] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.681345] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.681359] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.690136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.699 [2024-06-10 11:56:11.690149] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.699 [2024-06-10 11:56:11.698403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.698418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.706396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.706410] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.714757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.714770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.723626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.723640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.731812] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.731826] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.740496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.740513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.749027] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.749042] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.757685] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.757699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.766356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.766370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.774733] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.774746] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.783420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.783433] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.792521] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.792536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.800798] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.800811] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.809306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.809320] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.818237] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.818255] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.827327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.827341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.835436] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.835450] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.843833] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.843847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.852360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.852374] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.860781] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.860795] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.869002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.869016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.877423] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.877438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.886157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.886172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.894587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.894600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.902968] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.902986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.911729] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.911744] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.920051] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.920065] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.928732] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.928746] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.937589] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.937603] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.945991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.946005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.954497] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.954511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.962953] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.962968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.971607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.971621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.979829] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.979842] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.988624] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.988638] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:11.997415] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:11.997430] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.006060] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.006074] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.014883] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.014897] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.023057] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.023071] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.031716] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.031731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.040553] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.040568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.049337] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.049352] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.057683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.057697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.066651] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.066669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.075155] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.075169] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.083822] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.083837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.092710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.092724] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.100708] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.100722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.109463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.109477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.118510] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.118524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.126355] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.126369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.135522] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.135537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.144214] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.144229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.152952] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.152968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.161517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.161532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.170424] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.170438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.179284] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.179297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.187926] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.187941] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.196618] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.196633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.205694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.205708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.214480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.214494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.223062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.223076] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.231884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.231902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.240584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.240598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.249024] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.249038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.257789] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.257803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.266708] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.266722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.275389] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.275403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.283602] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.283616] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.292087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.292101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.300714] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.300728] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.309809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.309823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.318099] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.318114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.326786] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.326801] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.335597] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.335612] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.344521] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.344536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.353053] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.353068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.361718] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.361732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.370259] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.370273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.378951] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.378965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.387449] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.387463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.396024] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.396042] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.404604] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.404619] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.413575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.413589] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.421881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.700 [2024-06-10 11:56:12.421896] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.700 [2024-06-10 11:56:12.430457] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.701 [2024-06-10 11:56:12.430471] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.701 [2024-06-10 11:56:12.438890] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.701 [2024-06-10 11:56:12.438904] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.701 [2024-06-10 11:56:12.447371] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.701 [2024-06-10 11:56:12.447385] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.701 [2024-06-10 11:56:12.456025] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.701 [2024-06-10 11:56:12.456039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.701 [2024-06-10 11:56:12.464827] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.701 [2024-06-10 11:56:12.464841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.473564] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.473579] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.482262] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.482277] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.491024] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.491038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.499975] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.499989] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.508540] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.508555] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.517285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.517300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.526027] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.526042] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.534948] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.534962] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.543208] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.543222] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.551327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.551341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.559914] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.559928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.568426] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.568440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.577063] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.577077] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.585627] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.585642] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.594272] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.594286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.602727] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.602741] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.611428] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.611442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.620131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.620146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.628751] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.628765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.637287] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.637302] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.645860] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.645874] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.654554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.654568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.662875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.662889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.671780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.671795] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.680349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.680363] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.689292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.689306] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.697821] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.697835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.706185] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.706199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.714646] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.714660] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.723559] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.723574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.961 [2024-06-10 11:56:12.732212] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.961 [2024-06-10 11:56:12.732225] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.222 [2024-06-10 11:56:12.740558] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:19.222 [2024-06-10 11:56:12.740573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.222 [2024-06-10 11:56:12.749109] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:19.222 [2024-06-10 11:56:12.749122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.222 [2024-06-10 11:56:12.757671] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:19.222 [2024-06-10 11:56:12.757685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.222 [2024-06-10 11:56:12.766439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:19.222 [2024-06-10 11:56:12.766453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.222 [2024-06-10 11:56:12.775258] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:19.222 [2024-06-10 11:56:12.775272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.222 [2024-06-10 11:56:12.783923] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:19.222 [2024-06-10 11:56:12.783937] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.222 [2024-06-10 11:56:12.792084] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:19.222 [2024-06-10 11:56:12.792098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.222 [2024-06-10 11:56:12.800820] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:19.222 [2024-06-10 11:56:12.800834] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.222 [2024-06-10 11:56:12.809849] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:19.222 [2024-06-10 11:56:12.809863] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.222 [2024-06-10 11:56:12.818697] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same two-line error pair repeats roughly every 8-9 ms from 00:18:19.222 (2024-06-10 11:56:12.818) through 00:18:21.837 (2024-06-10 11:56:15.428); the intervening repetitions differ only in their timestamps and are elided here]
00:18:21.837 [2024-06-10 11:56:15.428127]
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.837 [2024-06-10 11:56:15.436627] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.837 [2024-06-10 11:56:15.436642] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.837 [2024-06-10 11:56:15.445047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.837 [2024-06-10 11:56:15.445062] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.837 [2024-06-10 11:56:15.453544] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.837 [2024-06-10 11:56:15.453559] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.837 [2024-06-10 11:56:15.462070] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.837 [2024-06-10 11:56:15.462085] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.837 [2024-06-10 11:56:15.470639] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.837 [2024-06-10 11:56:15.470654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.837 [2024-06-10 11:56:15.479080] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.837 [2024-06-10 11:56:15.479094] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.837 [2024-06-10 11:56:15.487509] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.837 [2024-06-10 11:56:15.487523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.837 [2024-06-10 11:56:15.495940] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.837 [2024-06-10 11:56:15.495954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.837 [2024-06-10 11:56:15.505141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.837 [2024-06-10 11:56:15.505156] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.837 [2024-06-10 11:56:15.513602] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.837 [2024-06-10 11:56:15.513616] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.837 [2024-06-10 11:56:15.522366] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.837 [2024-06-10 11:56:15.522381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.837 [2024-06-10 11:56:15.530985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.837 [2024-06-10 11:56:15.530999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.837 [2024-06-10 11:56:15.539952] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.837 [2024-06-10 11:56:15.539967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.837 [2024-06-10 11:56:15.548188] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.837 [2024-06-10 11:56:15.548202] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.837 [2024-06-10 11:56:15.556635] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.837 [2024-06-10 11:56:15.556650] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.837 [2024-06-10 11:56:15.565295] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.837 [2024-06-10 11:56:15.565309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.837 [2024-06-10 11:56:15.573980] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.837 [2024-06-10 11:56:15.573995] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.837 [2024-06-10 11:56:15.582758] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.837 [2024-06-10 11:56:15.582773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.837 [2024-06-10 11:56:15.591173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.837 [2024-06-10 11:56:15.591188] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.837 [2024-06-10 11:56:15.600009] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.837 [2024-06-10 11:56:15.600023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.097 [2024-06-10 11:56:15.608545] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.097 [2024-06-10 11:56:15.608560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.097 [2024-06-10 11:56:15.616956] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.097 [2024-06-10 11:56:15.616970] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.097 [2024-06-10 11:56:15.625549] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.097 [2024-06-10 11:56:15.625563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.097 [2024-06-10 11:56:15.634235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.097 [2024-06-10 11:56:15.634256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.097 [2024-06-10 11:56:15.642732] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.097 [2024-06-10 11:56:15.642746] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.097 [2024-06-10 11:56:15.651293] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.097 [2024-06-10 11:56:15.651308] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.097 [2024-06-10 11:56:15.659732] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.097 [2024-06-10 11:56:15.659747] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.097 [2024-06-10 11:56:15.668106] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.097 [2024-06-10 11:56:15.668121] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.097 [2024-06-10 11:56:15.676761] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.097 [2024-06-10 11:56:15.676776] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.097 [2024-06-10 11:56:15.685121] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.097 [2024-06-10 11:56:15.685136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.097 [2024-06-10 11:56:15.693618] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.097 [2024-06-10 11:56:15.693632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.097 [2024-06-10 11:56:15.702519] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.097 [2024-06-10 11:56:15.702534] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.097 [2024-06-10 11:56:15.711168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.097 [2024-06-10 11:56:15.711183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.097 [2024-06-10 11:56:15.719745] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.097 [2024-06-10 11:56:15.719759] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.097 [2024-06-10 11:56:15.727950] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.097 [2024-06-10 11:56:15.727965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.097 [2024-06-10 11:56:15.736502] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.097 [2024-06-10 11:56:15.736517] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.097 [2024-06-10 11:56:15.745021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.098 [2024-06-10 11:56:15.745036] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.098 [2024-06-10 11:56:15.753811] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.098 [2024-06-10 11:56:15.753825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.098 [2024-06-10 11:56:15.762502] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.098 [2024-06-10 11:56:15.762517] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.098 [2024-06-10 11:56:15.771118] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.098 [2024-06-10 11:56:15.771133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.098 [2024-06-10 11:56:15.779693] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.098 [2024-06-10 11:56:15.779707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.098 [2024-06-10 11:56:15.788109] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.098 [2024-06-10 11:56:15.788123] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.098 [2024-06-10 11:56:15.796580] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.098 [2024-06-10 11:56:15.796598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.098 [2024-06-10 11:56:15.805466] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.098 [2024-06-10 11:56:15.805480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.098 [2024-06-10 11:56:15.814203] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.098 [2024-06-10 11:56:15.814216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.098 [2024-06-10 11:56:15.822828] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.098 [2024-06-10 11:56:15.822842] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.098 [2024-06-10 11:56:15.831534] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.098 [2024-06-10 11:56:15.831549] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.098 [2024-06-10 11:56:15.837557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.098 [2024-06-10 11:56:15.837571] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.098 00:18:22.098 Latency(us) 00:18:22.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.098 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:18:22.098 Nvme1n1 : 5.01 20084.02 156.91 0.00 0.00 6367.37 2402.99 17803.95 00:18:22.098 =================================================================================================================== 00:18:22.098 Total : 20084.02 156.91 0.00 0.00 6367.37 2402.99 17803.95 00:18:22.098 [2024-06-10 11:56:15.845573] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.098 [2024-06-10 11:56:15.845583] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.098 [2024-06-10 11:56:15.853593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.098 [2024-06-10 11:56:15.853603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.098 [2024-06-10 11:56:15.861615] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.098 [2024-06-10 11:56:15.861624] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.358 [2024-06-10 11:56:15.869635] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.358 [2024-06-10 11:56:15.869646] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.358 [2024-06-10 11:56:15.877654] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.358 [2024-06-10 11:56:15.877663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.358 [2024-06-10 11:56:15.885674] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.358 [2024-06-10 11:56:15.885684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.358 [2024-06-10 11:56:15.893695] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.358 [2024-06-10 11:56:15.893704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.358 [2024-06-10 11:56:15.901715] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.358 [2024-06-10 11:56:15.901722] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.358 [2024-06-10 11:56:15.909736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.358 [2024-06-10 11:56:15.909744] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.358 [2024-06-10 11:56:15.917756] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.358 [2024-06-10 11:56:15.917764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.358 [2024-06-10 11:56:15.925778] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.358 [2024-06-10 11:56:15.925792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.358 [2024-06-10 11:56:15.933799] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.358 [2024-06-10 11:56:15.933808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.358 [2024-06-10 11:56:15.941818] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.358 [2024-06-10 11:56:15.941826] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.358 [2024-06-10 11:56:15.949840] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.358 [2024-06-10 11:56:15.949848] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.358 [2024-06-10 11:56:15.957860] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.358 [2024-06-10 11:56:15.957868] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.358 [2024-06-10 11:56:15.965881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.358 [2024-06-10 11:56:15.965888] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1937780) - No such process 00:18:22.358 11:56:15 -- target/zcopy.sh@49 -- # wait 1937780 00:18:22.359 11:56:15 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:22.359 11:56:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:22.359 11:56:15 -- common/autotest_common.sh@10 -- # set +x 00:18:22.359 11:56:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:22.359 11:56:15 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:22.359 11:56:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:22.359 11:56:15 -- common/autotest_common.sh@10 -- # set +x 00:18:22.359 delay0 00:18:22.359 11:56:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:22.359 11:56:15 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:22.359 11:56:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:22.359 11:56:15 -- common/autotest_common.sh@10 -- # set +x 00:18:22.359 11:56:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:22.359 11:56:15 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:22.359 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.618 [2024-06-10 
11:56:16.142443] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:29.202 Initializing NVMe Controllers 00:18:29.202 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:29.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:29.202 Initialization complete. Launching workers. 00:18:29.202 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 144 00:18:29.202 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 424, failed to submit 40 00:18:29.202 success 239, unsuccess 185, failed 0 00:18:29.202 11:56:22 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:29.202 11:56:22 -- target/zcopy.sh@60 -- # nvmftestfini 00:18:29.202 11:56:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:29.202 11:56:22 -- nvmf/common.sh@116 -- # sync 00:18:29.202 11:56:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:29.202 11:56:22 -- nvmf/common.sh@119 -- # set +e 00:18:29.202 11:56:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:29.202 11:56:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:29.202 rmmod nvme_tcp 00:18:29.202 rmmod nvme_fabrics 00:18:29.202 rmmod nvme_keyring 00:18:29.202 11:56:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:29.202 11:56:22 -- nvmf/common.sh@123 -- # set -e 00:18:29.202 11:56:22 -- nvmf/common.sh@124 -- # return 0 00:18:29.202 11:56:22 -- nvmf/common.sh@477 -- # '[' -n 1935680 ']' 00:18:29.202 11:56:22 -- nvmf/common.sh@478 -- # killprocess 1935680 00:18:29.202 11:56:22 -- common/autotest_common.sh@926 -- # '[' -z 1935680 ']' 00:18:29.202 11:56:22 -- common/autotest_common.sh@930 -- # kill -0 1935680 00:18:29.202 11:56:22 -- common/autotest_common.sh@931 -- # uname 00:18:29.202 11:56:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:29.202 11:56:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1935680 00:18:29.202 11:56:22 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:29.202 11:56:22 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:29.202 11:56:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1935680' 00:18:29.202 killing process with pid 1935680 00:18:29.202 11:56:22 -- common/autotest_common.sh@945 -- # kill 1935680 00:18:29.202 11:56:22 -- common/autotest_common.sh@950 -- # wait 1935680 00:18:29.202 11:56:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:29.202 11:56:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:29.202 11:56:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:29.202 11:56:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:29.202 11:56:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:29.202 11:56:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.202 11:56:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:29.202 11:56:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.119 11:56:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:31.119 00:18:31.119 real 0m32.952s 00:18:31.119 user 0m44.949s 00:18:31.119 sys 0m10.006s 00:18:31.119 11:56:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:31.119 11:56:24 -- common/autotest_common.sh@10 -- # set +x 00:18:31.119 ************************************ 00:18:31.119 END TEST nvmf_zcopy 00:18:31.119 
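Editor's note: the sequence zcopy.sh drove in that last phase — swapping the subsystem's namespace for a deliberately slow delay bdev and then firing the abort example at it — can be replayed by hand. This is only a hedged sketch: it assumes scripts/rpc.py can reach the running target's RPC socket and that a malloc bdev named malloc0 already exists; the flags simply mirror the rpc_cmd and abort invocations logged above.

# Replace namespace 1 with a delay bdev configured for large artificial latencies (values in microseconds),
# then drive aborts against it under load.
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# 5 s of randrw at queue depth 64 on core 0, issuing abort requests over the NVMe/TCP path.
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'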
************************************ 00:18:31.119 11:56:24 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:31.119 11:56:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:31.119 11:56:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:31.119 11:56:24 -- common/autotest_common.sh@10 -- # set +x 00:18:31.119 ************************************ 00:18:31.119 START TEST nvmf_nmic 00:18:31.119 ************************************ 00:18:31.119 11:56:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:31.119 * Looking for test storage... 00:18:31.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:31.119 11:56:24 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:31.119 11:56:24 -- nvmf/common.sh@7 -- # uname -s 00:18:31.119 11:56:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.119 11:56:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.119 11:56:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.119 11:56:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.119 11:56:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.119 11:56:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.119 11:56:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.119 11:56:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.119 11:56:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.119 11:56:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.119 11:56:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:31.119 11:56:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:31.119 11:56:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.119 11:56:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.119 11:56:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:31.119 11:56:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:31.119 11:56:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.119 11:56:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.119 11:56:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.119 11:56:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.119 11:56:24 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.119 11:56:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.119 11:56:24 -- paths/export.sh@5 -- # export PATH 00:18:31.119 11:56:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.119 11:56:24 -- nvmf/common.sh@46 -- # : 0 00:18:31.119 11:56:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:31.119 11:56:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:31.119 11:56:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:31.119 11:56:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.119 11:56:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.119 11:56:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:31.119 11:56:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:31.119 11:56:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:31.119 11:56:24 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:31.119 11:56:24 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:31.119 11:56:24 -- target/nmic.sh@14 -- # nvmftestinit 00:18:31.119 11:56:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:31.119 11:56:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.119 11:56:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:31.119 11:56:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:31.119 11:56:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:31.119 11:56:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.119 11:56:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.119 11:56:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.119 11:56:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:31.119 11:56:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:31.119 11:56:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:31.119 11:56:24 -- common/autotest_common.sh@10 -- # set +x 00:18:39.272 11:56:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:18:39.272 11:56:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:39.272 11:56:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:39.272 11:56:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:39.272 11:56:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:39.272 11:56:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:39.272 11:56:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:39.272 11:56:31 -- nvmf/common.sh@294 -- # net_devs=() 00:18:39.272 11:56:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:39.272 11:56:31 -- nvmf/common.sh@295 -- # e810=() 00:18:39.272 11:56:31 -- nvmf/common.sh@295 -- # local -ga e810 00:18:39.272 11:56:31 -- nvmf/common.sh@296 -- # x722=() 00:18:39.272 11:56:31 -- nvmf/common.sh@296 -- # local -ga x722 00:18:39.273 11:56:31 -- nvmf/common.sh@297 -- # mlx=() 00:18:39.273 11:56:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:39.273 11:56:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:39.273 11:56:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:39.273 11:56:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:39.273 11:56:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:39.273 11:56:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:39.273 11:56:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:39.273 11:56:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:39.273 11:56:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:39.273 11:56:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:39.273 11:56:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:39.273 11:56:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:39.273 11:56:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:39.273 11:56:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:39.273 11:56:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:39.273 11:56:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:39.273 11:56:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:39.273 11:56:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:39.273 11:56:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:39.273 11:56:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:39.273 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:39.273 11:56:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:39.273 11:56:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:39.273 11:56:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.273 11:56:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.273 11:56:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:39.273 11:56:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:39.273 11:56:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:39.273 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:39.273 11:56:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:39.273 11:56:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:39.273 11:56:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.273 11:56:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.273 11:56:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:39.273 11:56:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 
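Editor's note: the scan above walks a pre-built PCI bus cache; as a rough, hand-run approximation (not the harness's actual implementation), the same Intel E810 ports and the kernel net devices bound to them could be located with lspci plus sysfs:

# E810 NICs carry device ID 0x159b; print each PCI function and its associated netdev, if any.
for bdf in $(lspci -Dnmm -d 8086:159b | awk '{print $1}'); do
    echo "Found $bdf -> $(ls /sys/bus/pci/devices/$bdf/net 2>/dev/null)"
done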
00:18:39.273 11:56:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:39.273 11:56:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:39.273 11:56:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:39.273 11:56:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.273 11:56:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:39.273 11:56:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.273 11:56:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:39.273 Found net devices under 0000:31:00.0: cvl_0_0 00:18:39.273 11:56:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.273 11:56:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:39.273 11:56:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.273 11:56:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:39.273 11:56:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.273 11:56:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:39.273 Found net devices under 0000:31:00.1: cvl_0_1 00:18:39.273 11:56:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.273 11:56:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:39.273 11:56:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:39.273 11:56:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:39.273 11:56:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:39.273 11:56:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:39.273 11:56:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:39.273 11:56:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:39.273 11:56:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:39.273 11:56:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:39.273 11:56:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:39.273 11:56:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:39.273 11:56:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:39.273 11:56:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:39.273 11:56:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:39.273 11:56:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:39.273 11:56:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:39.273 11:56:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:39.273 11:56:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:39.273 11:56:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:39.273 11:56:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:39.273 11:56:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:39.273 11:56:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:39.273 11:56:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:39.273 11:56:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:39.273 11:56:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:39.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:39.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:18:39.273 00:18:39.273 --- 10.0.0.2 ping statistics --- 00:18:39.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.273 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:18:39.273 11:56:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:39.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:39.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:18:39.273 00:18:39.273 --- 10.0.0.1 ping statistics --- 00:18:39.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.273 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:18:39.273 11:56:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:39.273 11:56:31 -- nvmf/common.sh@410 -- # return 0 00:18:39.273 11:56:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:39.273 11:56:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:39.273 11:56:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:39.273 11:56:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:39.273 11:56:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:39.273 11:56:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:39.273 11:56:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:39.273 11:56:31 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:39.273 11:56:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:39.273 11:56:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:39.273 11:56:31 -- common/autotest_common.sh@10 -- # set +x 00:18:39.273 11:56:31 -- nvmf/common.sh@469 -- # nvmfpid=1944508 00:18:39.273 11:56:31 -- nvmf/common.sh@470 -- # waitforlisten 1944508 00:18:39.273 11:56:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:39.273 11:56:31 -- common/autotest_common.sh@819 -- # '[' -z 1944508 ']' 00:18:39.273 11:56:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.273 11:56:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:39.273 11:56:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.273 11:56:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:39.273 11:56:31 -- common/autotest_common.sh@10 -- # set +x 00:18:39.273 [2024-06-10 11:56:31.961320] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:39.273 [2024-06-10 11:56:31.961368] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.273 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.273 [2024-06-10 11:56:32.030103] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:39.273 [2024-06-10 11:56:32.094041] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:39.273 [2024-06-10 11:56:32.094175] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.273 [2024-06-10 11:56:32.094185] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
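Editor's note, condensing the nvmf_tcp_init trace above: one E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, its peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is then launched inside that namespace. A hedged recap of those steps, with interface names, addresses, and flags taken from the log:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open NVMe/TCP port 4420 on the initiator-side interface
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &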
00:18:39.273 [2024-06-10 11:56:32.094193] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:39.273 [2024-06-10 11:56:32.094280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.273 [2024-06-10 11:56:32.094362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.273 [2024-06-10 11:56:32.094658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:39.273 [2024-06-10 11:56:32.094658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.273 11:56:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:39.273 11:56:32 -- common/autotest_common.sh@852 -- # return 0 00:18:39.273 11:56:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:39.273 11:56:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:39.273 11:56:32 -- common/autotest_common.sh@10 -- # set +x 00:18:39.273 11:56:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.273 11:56:32 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:39.273 11:56:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.273 11:56:32 -- common/autotest_common.sh@10 -- # set +x 00:18:39.273 [2024-06-10 11:56:32.830621] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.273 11:56:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.273 11:56:32 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:39.273 11:56:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.273 11:56:32 -- common/autotest_common.sh@10 -- # set +x 00:18:39.273 Malloc0 00:18:39.273 11:56:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.273 11:56:32 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:39.273 11:56:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.273 11:56:32 -- common/autotest_common.sh@10 -- # set +x 00:18:39.274 11:56:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.274 11:56:32 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:39.274 11:56:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.274 11:56:32 -- common/autotest_common.sh@10 -- # set +x 00:18:39.274 11:56:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.274 11:56:32 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:39.274 11:56:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.274 11:56:32 -- common/autotest_common.sh@10 -- # set +x 00:18:39.274 [2024-06-10 11:56:32.873991] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.274 11:56:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.274 11:56:32 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:39.274 test case1: single bdev can't be used in multiple subsystems 00:18:39.274 11:56:32 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:39.274 11:56:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.274 11:56:32 -- common/autotest_common.sh@10 -- # set +x 00:18:39.274 11:56:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.274 11:56:32 -- target/nmic.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:39.274 11:56:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.274 11:56:32 -- common/autotest_common.sh@10 -- # set +x 00:18:39.274 11:56:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.274 11:56:32 -- target/nmic.sh@28 -- # nmic_status=0 00:18:39.274 11:56:32 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:39.274 11:56:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.274 11:56:32 -- common/autotest_common.sh@10 -- # set +x 00:18:39.274 [2024-06-10 11:56:32.897912] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:39.274 [2024-06-10 11:56:32.897933] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:39.274 [2024-06-10 11:56:32.897940] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.274 request: 00:18:39.274 { 00:18:39.274 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:39.274 "namespace": { 00:18:39.274 "bdev_name": "Malloc0" 00:18:39.274 }, 00:18:39.274 "method": "nvmf_subsystem_add_ns", 00:18:39.274 "req_id": 1 00:18:39.274 } 00:18:39.274 Got JSON-RPC error response 00:18:39.274 response: 00:18:39.274 { 00:18:39.274 "code": -32602, 00:18:39.274 "message": "Invalid parameters" 00:18:39.274 } 00:18:39.274 11:56:32 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:18:39.274 11:56:32 -- target/nmic.sh@29 -- # nmic_status=1 00:18:39.274 11:56:32 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:39.274 11:56:32 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:39.274 Adding namespace failed - expected result. 
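Editor's note: the failure above is the point of test case 1 — the first subsystem opens Malloc0 with an exclusive write claim, so a second nvmf_subsystem_add_ns on the same bdev must be rejected. A hedged reproduction with scripts/rpc.py, reusing the names and flags from the trace:

scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # succeeds and claims the bdev
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # rejected: Malloc0 already claimed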
00:18:39.274 11:56:32 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:39.274 test case2: host connect to nvmf target in multiple paths 00:18:39.274 11:56:32 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:39.274 11:56:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.274 11:56:32 -- common/autotest_common.sh@10 -- # set +x 00:18:39.274 [2024-06-10 11:56:32.910054] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:39.274 11:56:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.274 11:56:32 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:41.188 11:56:34 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:42.573 11:56:35 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:42.573 11:56:35 -- common/autotest_common.sh@1177 -- # local i=0 00:18:42.573 11:56:35 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:42.573 11:56:35 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:42.573 11:56:35 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:44.488 11:56:37 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:44.488 11:56:37 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:44.488 11:56:37 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:44.488 11:56:37 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:44.488 11:56:37 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:44.488 11:56:37 -- common/autotest_common.sh@1187 -- # return 0 00:18:44.488 11:56:37 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:44.488 [global] 00:18:44.488 thread=1 00:18:44.488 invalidate=1 00:18:44.488 rw=write 00:18:44.488 time_based=1 00:18:44.488 runtime=1 00:18:44.488 ioengine=libaio 00:18:44.488 direct=1 00:18:44.488 bs=4096 00:18:44.488 iodepth=1 00:18:44.488 norandommap=0 00:18:44.488 numjobs=1 00:18:44.488 00:18:44.488 verify_dump=1 00:18:44.488 verify_backlog=512 00:18:44.488 verify_state_save=0 00:18:44.488 do_verify=1 00:18:44.488 verify=crc32c-intel 00:18:44.488 [job0] 00:18:44.488 filename=/dev/nvme0n1 00:18:44.488 Could not set queue depth (nvme0n1) 00:18:44.749 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:44.749 fio-3.35 00:18:44.749 Starting 1 thread 00:18:46.136 00:18:46.136 job0: (groupid=0, jobs=1): err= 0: pid=1945843: Mon Jun 10 11:56:39 2024 00:18:46.136 read: IOPS=14, BW=59.4KiB/s (60.8kB/s)(60.0KiB/1010msec) 00:18:46.136 slat (nsec): min=9358, max=25844, avg=24302.87, stdev=4139.99 00:18:46.136 clat (usec): min=979, max=42972, avg=39478.16, stdev=10658.69 00:18:46.136 lat (usec): min=988, max=42998, avg=39502.46, stdev=10662.82 00:18:46.136 clat percentiles (usec): 00:18:46.136 | 1.00th=[ 979], 5.00th=[ 979], 10.00th=[41681], 20.00th=[42206], 00:18:46.136 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:18:46.136 | 
70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:18:46.136 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:18:46.136 | 99.99th=[42730] 00:18:46.136 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:18:46.136 slat (usec): min=9, max=26704, avg=81.99, stdev=1178.88 00:18:46.136 clat (usec): min=368, max=1076, avg=722.20, stdev=98.54 00:18:46.136 lat (usec): min=378, max=27471, avg=804.19, stdev=1185.30 00:18:46.136 clat percentiles (usec): 00:18:46.136 | 1.00th=[ 453], 5.00th=[ 545], 10.00th=[ 594], 20.00th=[ 644], 00:18:46.136 | 30.00th=[ 685], 40.00th=[ 701], 50.00th=[ 725], 60.00th=[ 758], 00:18:46.136 | 70.00th=[ 783], 80.00th=[ 807], 90.00th=[ 832], 95.00th=[ 857], 00:18:46.136 | 99.00th=[ 922], 99.50th=[ 996], 99.90th=[ 1074], 99.95th=[ 1074], 00:18:46.136 | 99.99th=[ 1074] 00:18:46.136 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:46.136 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:46.136 lat (usec) : 500=2.47%, 750=53.89%, 1000=40.61% 00:18:46.136 lat (msec) : 2=0.38%, 50=2.66% 00:18:46.136 cpu : usr=0.59%, sys=1.59%, ctx=531, majf=0, minf=1 00:18:46.136 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:46.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.136 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.136 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:46.136 00:18:46.136 Run status group 0 (all jobs): 00:18:46.136 READ: bw=59.4KiB/s (60.8kB/s), 59.4KiB/s-59.4KiB/s (60.8kB/s-60.8kB/s), io=60.0KiB (61.4kB), run=1010-1010msec 00:18:46.136 WRITE: bw=2028KiB/s (2076kB/s), 2028KiB/s-2028KiB/s (2076kB/s-2076kB/s), io=2048KiB (2097kB), run=1010-1010msec 00:18:46.136 00:18:46.136 Disk stats (read/write): 00:18:46.136 nvme0n1: ios=37/512, merge=0/0, ticks=1429/354, in_queue=1783, util=98.70% 00:18:46.136 11:56:39 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:46.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:46.136 11:56:39 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:46.136 11:56:39 -- common/autotest_common.sh@1198 -- # local i=0 00:18:46.136 11:56:39 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:46.136 11:56:39 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:46.136 11:56:39 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:46.136 11:56:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:46.136 11:56:39 -- common/autotest_common.sh@1210 -- # return 0 00:18:46.136 11:56:39 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:46.136 11:56:39 -- target/nmic.sh@53 -- # nvmftestfini 00:18:46.136 11:56:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:46.136 11:56:39 -- nvmf/common.sh@116 -- # sync 00:18:46.136 11:56:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:46.136 11:56:39 -- nvmf/common.sh@119 -- # set +e 00:18:46.136 11:56:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:46.136 11:56:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:46.136 rmmod nvme_tcp 00:18:46.136 rmmod nvme_fabrics 00:18:46.136 rmmod nvme_keyring 00:18:46.136 11:56:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:46.136 11:56:39 -- nvmf/common.sh@123 -- # set -e 
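Editor's note: test case 2 above adds a second TCP listener (port 4421) to the same subsystem, connects the host once per path, and runs a short 4 KiB verified write job against the resulting /dev/nvme0n1. A hedged host-side outline, reusing the host NQN/ID printed in the trace (the harness itself goes through nvme connect and its fio-wrapper script, so this is an approximation, not its exact commands):

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
# One connect per listener; with native NVMe multipath the two controllers surface as a single namespace device.
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
# Roughly the [job0] section shown above: 1 s, QD1, 4 KiB writes with crc32c verification.
fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --rw=write --bs=4096 \
    --iodepth=1 --numjobs=1 --time_based --runtime=1 --verify=crc32c-intel
nvme disconnect -n nqn.2016-06.io.spdk:cnode1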
00:18:46.136 11:56:39 -- nvmf/common.sh@124 -- # return 0 00:18:46.136 11:56:39 -- nvmf/common.sh@477 -- # '[' -n 1944508 ']' 00:18:46.136 11:56:39 -- nvmf/common.sh@478 -- # killprocess 1944508 00:18:46.136 11:56:39 -- common/autotest_common.sh@926 -- # '[' -z 1944508 ']' 00:18:46.136 11:56:39 -- common/autotest_common.sh@930 -- # kill -0 1944508 00:18:46.136 11:56:39 -- common/autotest_common.sh@931 -- # uname 00:18:46.136 11:56:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:46.136 11:56:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1944508 00:18:46.136 11:56:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:46.136 11:56:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:46.136 11:56:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1944508' 00:18:46.136 killing process with pid 1944508 00:18:46.136 11:56:39 -- common/autotest_common.sh@945 -- # kill 1944508 00:18:46.136 11:56:39 -- common/autotest_common.sh@950 -- # wait 1944508 00:18:46.136 11:56:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:46.136 11:56:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:46.136 11:56:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:46.136 11:56:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:46.136 11:56:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:46.136 11:56:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.136 11:56:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.136 11:56:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.685 11:56:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:48.685 00:18:48.685 real 0m17.347s 00:18:48.685 user 0m44.913s 00:18:48.685 sys 0m5.941s 00:18:48.685 11:56:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:48.685 11:56:41 -- common/autotest_common.sh@10 -- # set +x 00:18:48.685 ************************************ 00:18:48.685 END TEST nvmf_nmic 00:18:48.685 ************************************ 00:18:48.685 11:56:42 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:48.685 11:56:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:48.685 11:56:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:48.685 11:56:42 -- common/autotest_common.sh@10 -- # set +x 00:18:48.685 ************************************ 00:18:48.685 START TEST nvmf_fio_target 00:18:48.685 ************************************ 00:18:48.685 11:56:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:48.685 * Looking for test storage... 
00:18:48.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:48.685 11:56:42 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:48.685 11:56:42 -- nvmf/common.sh@7 -- # uname -s 00:18:48.685 11:56:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:48.685 11:56:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:48.685 11:56:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:48.685 11:56:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:48.685 11:56:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:48.685 11:56:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:48.685 11:56:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:48.685 11:56:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:48.685 11:56:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:48.685 11:56:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:48.685 11:56:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:48.685 11:56:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:48.685 11:56:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:48.685 11:56:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:48.685 11:56:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:48.685 11:56:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:48.685 11:56:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.685 11:56:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.685 11:56:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.685 11:56:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.685 11:56:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.685 11:56:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.685 11:56:42 -- paths/export.sh@5 -- # export PATH 00:18:48.685 11:56:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.685 11:56:42 -- nvmf/common.sh@46 -- # : 0 00:18:48.685 11:56:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:48.685 11:56:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:48.685 11:56:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:48.685 11:56:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:48.685 11:56:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:48.685 11:56:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:48.685 11:56:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:48.685 11:56:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:48.685 11:56:42 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:48.685 11:56:42 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:48.685 11:56:42 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:48.685 11:56:42 -- target/fio.sh@16 -- # nvmftestinit 00:18:48.685 11:56:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:48.685 11:56:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:48.685 11:56:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:48.685 11:56:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:48.685 11:56:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:48.685 11:56:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.685 11:56:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:48.685 11:56:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.685 11:56:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:48.685 11:56:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:48.685 11:56:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:48.685 11:56:42 -- common/autotest_common.sh@10 -- # set +x 00:18:56.939 11:56:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:56.939 11:56:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:56.939 11:56:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:56.939 11:56:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:56.939 11:56:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:56.939 11:56:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:56.939 11:56:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:56.939 11:56:49 -- nvmf/common.sh@294 -- # net_devs=() 
00:18:56.939 11:56:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:56.939 11:56:49 -- nvmf/common.sh@295 -- # e810=() 00:18:56.939 11:56:49 -- nvmf/common.sh@295 -- # local -ga e810 00:18:56.939 11:56:49 -- nvmf/common.sh@296 -- # x722=() 00:18:56.939 11:56:49 -- nvmf/common.sh@296 -- # local -ga x722 00:18:56.939 11:56:49 -- nvmf/common.sh@297 -- # mlx=() 00:18:56.939 11:56:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:56.939 11:56:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:56.939 11:56:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:56.939 11:56:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:56.939 11:56:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:56.939 11:56:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:56.939 11:56:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:56.939 11:56:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:56.939 11:56:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:56.939 11:56:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:56.940 11:56:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:56.940 11:56:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:56.940 11:56:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:56.940 11:56:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:56.940 11:56:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:56.940 11:56:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:56.940 11:56:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:56.940 11:56:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:56.940 11:56:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:56.940 11:56:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:56.940 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:56.940 11:56:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:56.940 11:56:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:56.940 11:56:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:56.940 11:56:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:56.940 11:56:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:56.940 11:56:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:56.940 11:56:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:56.940 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:56.940 11:56:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:56.940 11:56:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:56.940 11:56:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:56.940 11:56:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:56.940 11:56:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:56.940 11:56:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:56.940 11:56:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:56.940 11:56:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:56.940 11:56:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:56.940 11:56:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:56.940 11:56:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:56.940 11:56:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:18:56.940 11:56:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:56.940 Found net devices under 0000:31:00.0: cvl_0_0 00:18:56.940 11:56:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:56.940 11:56:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:56.940 11:56:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:56.940 11:56:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:56.940 11:56:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:56.940 11:56:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:56.940 Found net devices under 0000:31:00.1: cvl_0_1 00:18:56.940 11:56:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:56.940 11:56:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:56.940 11:56:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:56.940 11:56:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:56.940 11:56:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:56.940 11:56:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:56.940 11:56:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:56.940 11:56:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:56.940 11:56:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:56.940 11:56:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:56.940 11:56:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:56.940 11:56:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:56.940 11:56:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:56.940 11:56:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:56.940 11:56:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:56.940 11:56:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:56.940 11:56:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:56.940 11:56:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:56.940 11:56:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:56.940 11:56:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:56.940 11:56:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:56.940 11:56:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:56.940 11:56:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:56.940 11:56:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:56.940 11:56:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:56.940 11:56:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:56.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:56.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:18:56.940 00:18:56.940 --- 10.0.0.2 ping statistics --- 00:18:56.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.940 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:18:56.940 11:56:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:56.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:56.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.354 ms 00:18:56.940 00:18:56.940 --- 10.0.0.1 ping statistics --- 00:18:56.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.940 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:18:56.940 11:56:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:56.940 11:56:49 -- nvmf/common.sh@410 -- # return 0 00:18:56.940 11:56:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:56.940 11:56:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:56.940 11:56:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:56.940 11:56:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:56.940 11:56:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:56.940 11:56:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:56.940 11:56:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:56.940 11:56:49 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:56.940 11:56:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:56.940 11:56:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:56.940 11:56:49 -- common/autotest_common.sh@10 -- # set +x 00:18:56.940 11:56:49 -- nvmf/common.sh@469 -- # nvmfpid=1950487 00:18:56.940 11:56:49 -- nvmf/common.sh@470 -- # waitforlisten 1950487 00:18:56.940 11:56:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:56.940 11:56:49 -- common/autotest_common.sh@819 -- # '[' -z 1950487 ']' 00:18:56.940 11:56:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.940 11:56:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:56.940 11:56:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.940 11:56:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:56.940 11:56:49 -- common/autotest_common.sh@10 -- # set +x 00:18:56.940 [2024-06-10 11:56:49.547909] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:56.940 [2024-06-10 11:56:49.547996] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.940 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.940 [2024-06-10 11:56:49.623985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:56.940 [2024-06-10 11:56:49.697568] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:56.940 [2024-06-10 11:56:49.697704] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.940 [2024-06-10 11:56:49.697713] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.940 [2024-06-10 11:56:49.697721] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:56.940 [2024-06-10 11:56:49.697886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.940 [2024-06-10 11:56:49.698006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.940 [2024-06-10 11:56:49.698168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.940 [2024-06-10 11:56:49.698168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:56.940 11:56:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:56.940 11:56:50 -- common/autotest_common.sh@852 -- # return 0 00:18:56.940 11:56:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:56.940 11:56:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:56.940 11:56:50 -- common/autotest_common.sh@10 -- # set +x 00:18:56.940 11:56:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.940 11:56:50 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:56.940 [2024-06-10 11:56:50.498804] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:56.940 11:56:50 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:57.202 11:56:50 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:57.202 11:56:50 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:57.202 11:56:50 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:57.202 11:56:50 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:57.462 11:56:51 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:57.462 11:56:51 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:57.722 11:56:51 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:57.722 11:56:51 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:57.722 11:56:51 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:57.983 11:56:51 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:57.983 11:56:51 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:57.983 11:56:51 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:57.983 11:56:51 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:58.243 11:56:51 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:58.243 11:56:51 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:58.526 11:56:52 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:58.526 11:56:52 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:58.526 11:56:52 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:58.788 11:56:52 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:58.788 11:56:52 
-- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:59.048 11:56:52 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:59.048 [2024-06-10 11:56:52.696873] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:59.048 11:56:52 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:59.308 11:56:52 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:59.308 11:56:53 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:01.210 11:56:54 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:01.210 11:56:54 -- common/autotest_common.sh@1177 -- # local i=0 00:19:01.210 11:56:54 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:01.210 11:56:54 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:19:01.210 11:56:54 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:19:01.210 11:56:54 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:03.120 11:56:56 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:03.120 11:56:56 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:03.120 11:56:56 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:03.120 11:56:56 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:19:03.120 11:56:56 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:03.120 11:56:56 -- common/autotest_common.sh@1187 -- # return 0 00:19:03.120 11:56:56 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:03.120 [global] 00:19:03.120 thread=1 00:19:03.120 invalidate=1 00:19:03.120 rw=write 00:19:03.120 time_based=1 00:19:03.120 runtime=1 00:19:03.120 ioengine=libaio 00:19:03.120 direct=1 00:19:03.120 bs=4096 00:19:03.120 iodepth=1 00:19:03.120 norandommap=0 00:19:03.120 numjobs=1 00:19:03.120 00:19:03.120 verify_dump=1 00:19:03.120 verify_backlog=512 00:19:03.120 verify_state_save=0 00:19:03.120 do_verify=1 00:19:03.120 verify=crc32c-intel 00:19:03.120 [job0] 00:19:03.120 filename=/dev/nvme0n1 00:19:03.120 [job1] 00:19:03.120 filename=/dev/nvme0n2 00:19:03.120 [job2] 00:19:03.120 filename=/dev/nvme0n3 00:19:03.120 [job3] 00:19:03.120 filename=/dev/nvme0n4 00:19:03.120 Could not set queue depth (nvme0n1) 00:19:03.120 Could not set queue depth (nvme0n2) 00:19:03.120 Could not set queue depth (nvme0n3) 00:19:03.120 Could not set queue depth (nvme0n4) 00:19:03.380 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:03.381 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:03.381 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:03.381 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:03.381 fio-3.35 
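Before the first fio run starts, the target-side rpc.py trace above is worth summarizing. The sketch below condenses that sequence; the arguments are copied from the log, the rpc.py path is shortened, and the bdev names Malloc0..Malloc6 are the ones SPDK returned in the trace.

#!/usr/bin/env bash
# Condensed target setup for fio.sh, as traced above: a TCP transport, two plain
# malloc namespaces, a raid0 bdev and a concat bdev, all exported through one
# subsystem listening on 10.0.0.2:4420.
set -euo pipefail
rpc=./scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192

# Two standalone 64 MiB / 512 B-block malloc bdevs (Malloc0, Malloc1).
$rpc bdev_malloc_create 64 512
$rpc bdev_malloc_create 64 512

# Two more malloc bdevs combined into a raid0 bdev.
$rpc bdev_malloc_create 64 512
$rpc bdev_malloc_create 64 512
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'

# Three malloc bdevs combined into a concat bdev.
$rpc bdev_malloc_create 64 512
$rpc bdev_malloc_create 64 512
$rpc bdev_malloc_create 64 512
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

# One subsystem exposing all four namespaces on a single TCP listener.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0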
00:19:03.381 Starting 4 threads 00:19:04.765 00:19:04.765 job0: (groupid=0, jobs=1): err= 0: pid=1952098: Mon Jun 10 11:56:58 2024 00:19:04.765 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:19:04.765 slat (nsec): min=6310, max=63066, avg=24128.08, stdev=8148.19 00:19:04.765 clat (usec): min=455, max=2132, avg=732.10, stdev=169.24 00:19:04.765 lat (usec): min=468, max=2178, avg=756.23, stdev=171.67 00:19:04.765 clat percentiles (usec): 00:19:04.765 | 1.00th=[ 474], 5.00th=[ 529], 10.00th=[ 570], 20.00th=[ 611], 00:19:04.765 | 30.00th=[ 660], 40.00th=[ 693], 50.00th=[ 734], 60.00th=[ 766], 00:19:04.765 | 70.00th=[ 799], 80.00th=[ 816], 90.00th=[ 848], 95.00th=[ 881], 00:19:04.765 | 99.00th=[ 1565], 99.50th=[ 1811], 99.90th=[ 2147], 99.95th=[ 2147], 00:19:04.765 | 99.99th=[ 2147] 00:19:04.765 write: IOPS=1015, BW=4064KiB/s (4161kB/s)(4068KiB/1001msec); 0 zone resets 00:19:04.765 slat (usec): min=8, max=2083, avg=33.82, stdev=65.19 00:19:04.765 clat (usec): min=173, max=1549, avg=558.54, stdev=165.88 00:19:04.765 lat (usec): min=183, max=2860, avg=592.36, stdev=183.77 00:19:04.765 clat percentiles (usec): 00:19:04.765 | 1.00th=[ 255], 5.00th=[ 297], 10.00th=[ 355], 20.00th=[ 396], 00:19:04.765 | 30.00th=[ 465], 40.00th=[ 506], 50.00th=[ 553], 60.00th=[ 586], 00:19:04.765 | 70.00th=[ 652], 80.00th=[ 701], 90.00th=[ 791], 95.00th=[ 840], 00:19:04.765 | 99.00th=[ 930], 99.50th=[ 971], 99.90th=[ 1037], 99.95th=[ 1549], 00:19:04.765 | 99.99th=[ 1549] 00:19:04.765 bw ( KiB/s): min= 4096, max= 4096, per=33.42%, avg=4096.00, stdev= 0.00, samples=1 00:19:04.765 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:04.765 lat (usec) : 250=0.46%, 500=26.16%, 750=49.38%, 1000=23.28% 00:19:04.765 lat (msec) : 2=0.59%, 4=0.13% 00:19:04.765 cpu : usr=2.70%, sys=6.20%, ctx=1533, majf=0, minf=1 00:19:04.765 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:04.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.765 issued rwts: total=512,1017,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.765 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:04.765 job1: (groupid=0, jobs=1): err= 0: pid=1952099: Mon Jun 10 11:56:58 2024 00:19:04.765 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:19:04.765 slat (nsec): min=6436, max=56622, avg=20949.18, stdev=8555.68 00:19:04.765 clat (usec): min=213, max=692, avg=540.80, stdev=59.59 00:19:04.765 lat (usec): min=238, max=717, avg=561.75, stdev=61.04 00:19:04.765 clat percentiles (usec): 00:19:04.765 | 1.00th=[ 392], 5.00th=[ 437], 10.00th=[ 453], 20.00th=[ 486], 00:19:04.765 | 30.00th=[ 523], 40.00th=[ 537], 50.00th=[ 553], 60.00th=[ 562], 00:19:04.765 | 70.00th=[ 578], 80.00th=[ 586], 90.00th=[ 611], 95.00th=[ 627], 00:19:04.765 | 99.00th=[ 660], 99.50th=[ 668], 99.90th=[ 676], 99.95th=[ 693], 00:19:04.765 | 99.99th=[ 693] 00:19:04.765 write: IOPS=1082, BW=4332KiB/s (4436kB/s)(4336KiB/1001msec); 0 zone resets 00:19:04.765 slat (nsec): min=9390, max=73965, avg=26997.92, stdev=9967.46 00:19:04.765 clat (usec): min=119, max=1278, avg=351.08, stdev=77.12 00:19:04.765 lat (usec): min=129, max=1288, avg=378.07, stdev=79.88 00:19:04.765 clat percentiles (usec): 00:19:04.765 | 1.00th=[ 153], 5.00th=[ 237], 10.00th=[ 258], 20.00th=[ 281], 00:19:04.765 | 30.00th=[ 318], 40.00th=[ 351], 50.00th=[ 363], 60.00th=[ 375], 00:19:04.765 | 70.00th=[ 388], 80.00th=[ 404], 90.00th=[ 433], 
95.00th=[ 453], 00:19:04.765 | 99.00th=[ 519], 99.50th=[ 523], 99.90th=[ 848], 99.95th=[ 1287], 00:19:04.766 | 99.99th=[ 1287] 00:19:04.766 bw ( KiB/s): min= 4096, max= 4096, per=33.42%, avg=4096.00, stdev= 0.00, samples=1 00:19:04.766 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:04.766 lat (usec) : 250=4.41%, 500=58.16%, 750=37.33%, 1000=0.05% 00:19:04.766 lat (msec) : 2=0.05% 00:19:04.766 cpu : usr=2.60%, sys=5.60%, ctx=2109, majf=0, minf=1 00:19:04.766 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:04.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.766 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.766 issued rwts: total=1024,1084,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.766 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:04.766 job2: (groupid=0, jobs=1): err= 0: pid=1952100: Mon Jun 10 11:56:58 2024 00:19:04.766 read: IOPS=15, BW=62.7KiB/s (64.2kB/s)(64.0KiB/1020msec) 00:19:04.766 slat (nsec): min=24750, max=25396, avg=25065.19, stdev=202.56 00:19:04.766 clat (usec): min=1293, max=42938, avg=39626.10, stdev=10228.80 00:19:04.766 lat (usec): min=1317, max=42964, avg=39651.16, stdev=10228.88 00:19:04.766 clat percentiles (usec): 00:19:04.766 | 1.00th=[ 1287], 5.00th=[ 1287], 10.00th=[41681], 20.00th=[41681], 00:19:04.766 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:19:04.766 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:19:04.766 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:19:04.766 | 99.99th=[42730] 00:19:04.766 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:19:04.766 slat (nsec): min=9648, max=51362, avg=29801.20, stdev=8382.84 00:19:04.766 clat (usec): min=361, max=983, avg=716.19, stdev=115.78 00:19:04.766 lat (usec): min=370, max=1015, avg=746.00, stdev=119.09 00:19:04.766 clat percentiles (usec): 00:19:04.766 | 1.00th=[ 445], 5.00th=[ 498], 10.00th=[ 562], 20.00th=[ 611], 00:19:04.766 | 30.00th=[ 660], 40.00th=[ 701], 50.00th=[ 717], 60.00th=[ 750], 00:19:04.766 | 70.00th=[ 791], 80.00th=[ 824], 90.00th=[ 857], 95.00th=[ 889], 00:19:04.766 | 99.00th=[ 947], 99.50th=[ 963], 99.90th=[ 988], 99.95th=[ 988], 00:19:04.766 | 99.99th=[ 988] 00:19:04.766 bw ( KiB/s): min= 4096, max= 4096, per=33.42%, avg=4096.00, stdev= 0.00, samples=1 00:19:04.766 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:04.766 lat (usec) : 500=5.30%, 750=52.08%, 1000=39.58% 00:19:04.766 lat (msec) : 2=0.19%, 50=2.84% 00:19:04.766 cpu : usr=0.79%, sys=1.37%, ctx=529, majf=0, minf=1 00:19:04.766 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:04.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.766 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.766 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.766 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:04.766 job3: (groupid=0, jobs=1): err= 0: pid=1952101: Mon Jun 10 11:56:58 2024 00:19:04.766 read: IOPS=14, BW=59.1KiB/s (60.5kB/s)(60.0KiB/1016msec) 00:19:04.766 slat (nsec): min=24257, max=25067, avg=24550.20, stdev=251.32 00:19:04.766 clat (usec): min=41930, max=43027, avg=42474.27, stdev=487.98 00:19:04.766 lat (usec): min=41955, max=43052, avg=42498.82, stdev=487.90 00:19:04.766 clat percentiles (usec): 00:19:04.766 | 1.00th=[41681], 5.00th=[41681], 
10.00th=[41681], 20.00th=[42206], 00:19:04.766 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42730], 60.00th=[42730], 00:19:04.766 | 70.00th=[42730], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:19:04.766 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:19:04.766 | 99.99th=[43254] 00:19:04.766 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:19:04.766 slat (nsec): min=9569, max=80226, avg=29114.41, stdev=7846.60 00:19:04.766 clat (usec): min=331, max=1229, avg=703.05, stdev=166.13 00:19:04.766 lat (usec): min=342, max=1260, avg=732.16, stdev=168.40 00:19:04.766 clat percentiles (usec): 00:19:04.766 | 1.00th=[ 359], 5.00th=[ 461], 10.00th=[ 494], 20.00th=[ 570], 00:19:04.766 | 30.00th=[ 611], 40.00th=[ 652], 50.00th=[ 693], 60.00th=[ 734], 00:19:04.766 | 70.00th=[ 783], 80.00th=[ 840], 90.00th=[ 914], 95.00th=[ 1020], 00:19:04.766 | 99.00th=[ 1139], 99.50th=[ 1188], 99.90th=[ 1237], 99.95th=[ 1237], 00:19:04.766 | 99.99th=[ 1237] 00:19:04.766 bw ( KiB/s): min= 4096, max= 4096, per=33.42%, avg=4096.00, stdev= 0.00, samples=1 00:19:04.766 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:04.766 lat (usec) : 500=10.44%, 750=51.99%, 1000=29.41% 00:19:04.766 lat (msec) : 2=5.31%, 50=2.85% 00:19:04.766 cpu : usr=0.79%, sys=1.38%, ctx=528, majf=0, minf=1 00:19:04.766 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:04.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.766 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.766 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.766 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:04.766 00:19:04.766 Run status group 0 (all jobs): 00:19:04.766 READ: bw=6145KiB/s (6293kB/s), 59.1KiB/s-4092KiB/s (60.5kB/s-4190kB/s), io=6268KiB (6418kB), run=1001-1020msec 00:19:04.766 WRITE: bw=12.0MiB/s (12.5MB/s), 2008KiB/s-4332KiB/s (2056kB/s-4436kB/s), io=12.2MiB (12.8MB), run=1001-1020msec 00:19:04.766 00:19:04.766 Disk stats (read/write): 00:19:04.766 nvme0n1: ios=565/573, merge=0/0, ticks=608/298, in_queue=906, util=98.80% 00:19:04.766 nvme0n2: ios=674/1024, merge=0/0, ticks=392/349, in_queue=741, util=81.60% 00:19:04.766 nvme0n3: ios=15/512, merge=0/0, ticks=592/340, in_queue=932, util=89.24% 00:19:04.766 nvme0n4: ios=9/512, merge=0/0, ticks=383/349, in_queue=732, util=88.62% 00:19:04.766 11:56:58 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:04.766 [global] 00:19:04.766 thread=1 00:19:04.766 invalidate=1 00:19:04.766 rw=randwrite 00:19:04.766 time_based=1 00:19:04.766 runtime=1 00:19:04.766 ioengine=libaio 00:19:04.766 direct=1 00:19:04.766 bs=4096 00:19:04.766 iodepth=1 00:19:04.766 norandommap=0 00:19:04.766 numjobs=1 00:19:04.766 00:19:04.766 verify_dump=1 00:19:04.766 verify_backlog=512 00:19:04.766 verify_state_save=0 00:19:04.766 do_verify=1 00:19:04.766 verify=crc32c-intel 00:19:04.766 [job0] 00:19:04.766 filename=/dev/nvme0n1 00:19:04.766 [job1] 00:19:04.766 filename=/dev/nvme0n2 00:19:04.766 [job2] 00:19:04.766 filename=/dev/nvme0n3 00:19:04.766 [job3] 00:19:04.766 filename=/dev/nvme0n4 00:19:04.766 Could not set queue depth (nvme0n1) 00:19:04.766 Could not set queue depth (nvme0n2) 00:19:04.766 Could not set queue depth (nvme0n3) 00:19:04.766 Could not set queue depth (nvme0n4) 00:19:05.027 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.027 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.027 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.027 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.027 fio-3.35 00:19:05.027 Starting 4 threads 00:19:06.413 00:19:06.413 job0: (groupid=0, jobs=1): err= 0: pid=1952629: Mon Jun 10 11:56:59 2024 00:19:06.413 read: IOPS=426, BW=1705KiB/s (1746kB/s)(1708KiB/1002msec) 00:19:06.413 slat (nsec): min=7309, max=58472, avg=24696.21, stdev=3296.75 00:19:06.413 clat (usec): min=538, max=42229, avg=1574.42, stdev=4108.43 00:19:06.413 lat (usec): min=562, max=42254, avg=1599.12, stdev=4108.40 00:19:06.413 clat percentiles (usec): 00:19:06.413 | 1.00th=[ 627], 5.00th=[ 750], 10.00th=[ 922], 20.00th=[ 996], 00:19:06.413 | 30.00th=[ 1090], 40.00th=[ 1172], 50.00th=[ 1205], 60.00th=[ 1221], 00:19:06.413 | 70.00th=[ 1237], 80.00th=[ 1254], 90.00th=[ 1287], 95.00th=[ 1319], 00:19:06.413 | 99.00th=[25035], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:06.413 | 99.99th=[42206] 00:19:06.413 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:19:06.413 slat (nsec): min=8887, max=90978, avg=26765.65, stdev=8743.15 00:19:06.413 clat (usec): min=236, max=840, avg=580.07, stdev=113.31 00:19:06.413 lat (usec): min=246, max=870, avg=606.83, stdev=116.59 00:19:06.413 clat percentiles (usec): 00:19:06.413 | 1.00th=[ 310], 5.00th=[ 347], 10.00th=[ 433], 20.00th=[ 478], 00:19:06.413 | 30.00th=[ 537], 40.00th=[ 562], 50.00th=[ 586], 60.00th=[ 619], 00:19:06.413 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 717], 95.00th=[ 750], 00:19:06.413 | 99.00th=[ 799], 99.50th=[ 824], 99.90th=[ 840], 99.95th=[ 840], 00:19:06.413 | 99.99th=[ 840] 00:19:06.413 bw ( KiB/s): min= 4096, max= 4096, per=41.00%, avg=4096.00, stdev= 0.00, samples=1 00:19:06.413 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:06.413 lat (usec) : 250=0.11%, 500=12.25%, 750=41.64%, 1000=9.69% 00:19:06.413 lat (msec) : 2=35.78%, 50=0.53% 00:19:06.413 cpu : usr=1.70%, sys=2.20%, ctx=940, majf=0, minf=1 00:19:06.413 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.414 issued rwts: total=427,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.414 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.414 job1: (groupid=0, jobs=1): err= 0: pid=1952630: Mon Jun 10 11:56:59 2024 00:19:06.414 read: IOPS=19, BW=77.7KiB/s (79.6kB/s)(80.0KiB/1029msec) 00:19:06.414 slat (nsec): min=23943, max=29348, avg=26283.65, stdev=1107.38 00:19:06.414 clat (usec): min=884, max=42908, avg=35870.95, stdev=15050.75 00:19:06.414 lat (usec): min=912, max=42932, avg=35897.23, stdev=15050.13 00:19:06.414 clat percentiles (usec): 00:19:06.414 | 1.00th=[ 889], 5.00th=[ 889], 10.00th=[ 914], 20.00th=[40633], 00:19:06.414 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:19:06.414 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:19:06.414 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:19:06.414 | 99.99th=[42730] 00:19:06.414 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 
zone resets 00:19:06.414 slat (nsec): min=8599, max=50522, avg=29884.45, stdev=8016.40 00:19:06.414 clat (usec): min=169, max=859, avg=568.90, stdev=109.95 00:19:06.414 lat (usec): min=177, max=894, avg=598.78, stdev=113.04 00:19:06.414 clat percentiles (usec): 00:19:06.414 | 1.00th=[ 289], 5.00th=[ 383], 10.00th=[ 420], 20.00th=[ 469], 00:19:06.414 | 30.00th=[ 523], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 603], 00:19:06.414 | 70.00th=[ 635], 80.00th=[ 660], 90.00th=[ 701], 95.00th=[ 734], 00:19:06.414 | 99.00th=[ 807], 99.50th=[ 816], 99.90th=[ 857], 99.95th=[ 857], 00:19:06.414 | 99.99th=[ 857] 00:19:06.414 bw ( KiB/s): min= 4096, max= 4096, per=41.00%, avg=4096.00, stdev= 0.00, samples=1 00:19:06.414 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:06.414 lat (usec) : 250=0.38%, 500=24.62%, 750=67.86%, 1000=3.76% 00:19:06.414 lat (msec) : 2=0.19%, 50=3.20% 00:19:06.414 cpu : usr=0.97%, sys=2.04%, ctx=532, majf=0, minf=1 00:19:06.414 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.414 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.414 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.414 job2: (groupid=0, jobs=1): err= 0: pid=1952631: Mon Jun 10 11:56:59 2024 00:19:06.414 read: IOPS=532, BW=2130KiB/s (2181kB/s)(2132KiB/1001msec) 00:19:06.414 slat (nsec): min=6504, max=66131, avg=25440.44, stdev=6592.92 00:19:06.414 clat (usec): min=257, max=1015, avg=762.74, stdev=127.47 00:19:06.414 lat (usec): min=284, max=1042, avg=788.18, stdev=128.30 00:19:06.414 clat percentiles (usec): 00:19:06.414 | 1.00th=[ 453], 5.00th=[ 545], 10.00th=[ 570], 20.00th=[ 652], 00:19:06.414 | 30.00th=[ 693], 40.00th=[ 750], 50.00th=[ 783], 60.00th=[ 824], 00:19:06.414 | 70.00th=[ 857], 80.00th=[ 881], 90.00th=[ 914], 95.00th=[ 938], 00:19:06.414 | 99.00th=[ 988], 99.50th=[ 1004], 99.90th=[ 1020], 99.95th=[ 1020], 00:19:06.414 | 99.99th=[ 1020] 00:19:06.414 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:19:06.414 slat (nsec): min=9155, max=68365, avg=31300.56, stdev=8460.93 00:19:06.414 clat (usec): min=153, max=861, avg=523.10, stdev=113.57 00:19:06.414 lat (usec): min=163, max=895, avg=554.40, stdev=116.36 00:19:06.414 clat percentiles (usec): 00:19:06.414 | 1.00th=[ 235], 5.00th=[ 326], 10.00th=[ 379], 20.00th=[ 424], 00:19:06.414 | 30.00th=[ 469], 40.00th=[ 498], 50.00th=[ 529], 60.00th=[ 553], 00:19:06.414 | 70.00th=[ 586], 80.00th=[ 619], 90.00th=[ 668], 95.00th=[ 701], 00:19:06.414 | 99.00th=[ 766], 99.50th=[ 775], 99.90th=[ 848], 99.95th=[ 865], 00:19:06.414 | 99.99th=[ 865] 00:19:06.414 bw ( KiB/s): min= 4096, max= 4096, per=41.00%, avg=4096.00, stdev= 0.00, samples=1 00:19:06.414 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:06.414 lat (usec) : 250=0.90%, 500=26.08%, 750=51.70%, 1000=21.13% 00:19:06.414 lat (msec) : 2=0.19% 00:19:06.414 cpu : usr=3.40%, sys=5.90%, ctx=1559, majf=0, minf=1 00:19:06.414 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.414 issued rwts: total=533,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.414 latency : target=0, window=0, percentile=100.00%, depth=1 
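As with the previous run, the job files above were generated by scripts/fio-wrapper from the flags -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v. A roughly equivalent standalone invocation for a single device is sketched below; it omits a few bookkeeping options from the generated file (e.g. verify_state_save=0) and assumes /dev/nvme0n1 is the namespace that appeared after the connect.

# Sketch: standalone fio command approximating one job of the wrapper-generated file.
fio --name=job0 --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 --thread --invalidate=1 \
    --rw=randwrite --bs=4096 --iodepth=1 --numjobs=1 \
    --time_based --runtime=1 \
    --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512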
00:19:06.414 job3: (groupid=0, jobs=1): err= 0: pid=1952632: Mon Jun 10 11:56:59 2024 00:19:06.414 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:19:06.414 slat (nsec): min=7158, max=60632, avg=25035.49, stdev=4065.81 00:19:06.414 clat (usec): min=826, max=1834, avg=1127.08, stdev=93.48 00:19:06.414 lat (usec): min=835, max=1861, avg=1152.12, stdev=94.25 00:19:06.414 clat percentiles (usec): 00:19:06.414 | 1.00th=[ 881], 5.00th=[ 963], 10.00th=[ 1012], 20.00th=[ 1074], 00:19:06.414 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1139], 00:19:06.414 | 70.00th=[ 1172], 80.00th=[ 1188], 90.00th=[ 1221], 95.00th=[ 1270], 00:19:06.414 | 99.00th=[ 1336], 99.50th=[ 1401], 99.90th=[ 1827], 99.95th=[ 1827], 00:19:06.414 | 99.99th=[ 1827] 00:19:06.414 write: IOPS=521, BW=2086KiB/s (2136kB/s)(2088KiB/1001msec); 0 zone resets 00:19:06.414 slat (nsec): min=8588, max=61242, avg=29244.54, stdev=8494.34 00:19:06.414 clat (usec): min=346, max=1440, avg=741.07, stdev=123.92 00:19:06.414 lat (usec): min=378, max=1450, avg=770.32, stdev=126.71 00:19:06.414 clat percentiles (usec): 00:19:06.414 | 1.00th=[ 461], 5.00th=[ 537], 10.00th=[ 594], 20.00th=[ 652], 00:19:06.414 | 30.00th=[ 685], 40.00th=[ 709], 50.00th=[ 742], 60.00th=[ 766], 00:19:06.414 | 70.00th=[ 807], 80.00th=[ 840], 90.00th=[ 898], 95.00th=[ 930], 00:19:06.414 | 99.00th=[ 1012], 99.50th=[ 1057], 99.90th=[ 1434], 99.95th=[ 1434], 00:19:06.414 | 99.99th=[ 1434] 00:19:06.414 bw ( KiB/s): min= 4096, max= 4096, per=41.00%, avg=4096.00, stdev= 0.00, samples=1 00:19:06.414 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:06.414 lat (usec) : 500=1.84%, 750=26.02%, 1000=26.21% 00:19:06.414 lat (msec) : 2=45.94% 00:19:06.414 cpu : usr=1.80%, sys=3.70%, ctx=1034, majf=0, minf=1 00:19:06.414 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.414 issued rwts: total=512,522,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.414 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.414 00:19:06.414 Run status group 0 (all jobs): 00:19:06.414 READ: bw=5800KiB/s (5939kB/s), 77.7KiB/s-2130KiB/s (79.6kB/s-2181kB/s), io=5968KiB (6111kB), run=1001-1029msec 00:19:06.414 WRITE: bw=9990KiB/s (10.2MB/s), 1990KiB/s-4092KiB/s (2038kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1029msec 00:19:06.414 00:19:06.414 Disk stats (read/write): 00:19:06.414 nvme0n1: ios=346/512, merge=0/0, ticks=565/278, in_queue=843, util=87.98% 00:19:06.414 nvme0n2: ios=51/512, merge=0/0, ticks=548/210, in_queue=758, util=87.67% 00:19:06.414 nvme0n3: ios=547/745, merge=0/0, ticks=1095/283, in_queue=1378, util=98.95% 00:19:06.414 nvme0n4: ios=413/512, merge=0/0, ticks=519/305, in_queue=824, util=92.10% 00:19:06.414 11:56:59 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:06.414 [global] 00:19:06.414 thread=1 00:19:06.414 invalidate=1 00:19:06.414 rw=write 00:19:06.414 time_based=1 00:19:06.414 runtime=1 00:19:06.414 ioengine=libaio 00:19:06.414 direct=1 00:19:06.414 bs=4096 00:19:06.414 iodepth=128 00:19:06.414 norandommap=0 00:19:06.414 numjobs=1 00:19:06.414 00:19:06.414 verify_dump=1 00:19:06.414 verify_backlog=512 00:19:06.414 verify_state_save=0 00:19:06.414 do_verify=1 00:19:06.414 verify=crc32c-intel 00:19:06.414 [job0] 00:19:06.414 
filename=/dev/nvme0n1 00:19:06.414 [job1] 00:19:06.414 filename=/dev/nvme0n2 00:19:06.414 [job2] 00:19:06.414 filename=/dev/nvme0n3 00:19:06.414 [job3] 00:19:06.414 filename=/dev/nvme0n4 00:19:06.414 Could not set queue depth (nvme0n1) 00:19:06.414 Could not set queue depth (nvme0n2) 00:19:06.414 Could not set queue depth (nvme0n3) 00:19:06.414 Could not set queue depth (nvme0n4) 00:19:06.674 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:06.674 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:06.674 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:06.675 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:06.675 fio-3.35 00:19:06.675 Starting 4 threads 00:19:08.059 00:19:08.059 job0: (groupid=0, jobs=1): err= 0: pid=1953174: Mon Jun 10 11:57:01 2024 00:19:08.059 read: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec) 00:19:08.059 slat (nsec): min=902, max=8380.2k, avg=63135.41, stdev=424071.59 00:19:08.059 clat (usec): min=3960, max=23440, avg=8336.41, stdev=2319.11 00:19:08.059 lat (usec): min=3962, max=23450, avg=8399.55, stdev=2346.43 00:19:08.059 clat percentiles (usec): 00:19:08.059 | 1.00th=[ 4817], 5.00th=[ 5604], 10.00th=[ 6456], 20.00th=[ 7046], 00:19:08.059 | 30.00th=[ 7242], 40.00th=[ 7439], 50.00th=[ 7701], 60.00th=[ 8094], 00:19:08.059 | 70.00th=[ 8586], 80.00th=[ 9634], 90.00th=[10683], 95.00th=[11994], 00:19:08.059 | 99.00th=[18482], 99.50th=[20841], 99.90th=[22938], 99.95th=[23462], 00:19:08.059 | 99.99th=[23462] 00:19:08.059 write: IOPS=7208, BW=28.2MiB/s (29.5MB/s)(28.2MiB/1003msec); 0 zone resets 00:19:08.059 slat (nsec): min=1571, max=9896.8k, avg=70023.21, stdev=396450.80 00:19:08.059 clat (usec): min=816, max=27941, avg=9315.92, stdev=4842.28 00:19:08.059 lat (usec): min=826, max=27946, avg=9385.95, stdev=4871.03 00:19:08.059 clat percentiles (usec): 00:19:08.059 | 1.00th=[ 2540], 5.00th=[ 4228], 10.00th=[ 5473], 20.00th=[ 6849], 00:19:08.059 | 30.00th=[ 7242], 40.00th=[ 7439], 50.00th=[ 7832], 60.00th=[ 8160], 00:19:08.059 | 70.00th=[ 8455], 80.00th=[ 9765], 90.00th=[18220], 95.00th=[20841], 00:19:08.059 | 99.00th=[23987], 99.50th=[26870], 99.90th=[27657], 99.95th=[27919], 00:19:08.059 | 99.99th=[27919] 00:19:08.059 bw ( KiB/s): min=25384, max=31960, per=31.66%, avg=28672.00, stdev=4649.93, samples=2 00:19:08.059 iops : min= 6346, max= 7990, avg=7168.00, stdev=1162.48, samples=2 00:19:08.059 lat (usec) : 1000=0.03% 00:19:08.059 lat (msec) : 2=0.26%, 4=1.79%, 10=79.75%, 20=14.63%, 50=3.54% 00:19:08.059 cpu : usr=5.19%, sys=5.79%, ctx=658, majf=0, minf=1 00:19:08.059 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:08.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:08.059 issued rwts: total=7168,7230,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.059 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:08.059 job1: (groupid=0, jobs=1): err= 0: pid=1953175: Mon Jun 10 11:57:01 2024 00:19:08.059 read: IOPS=4040, BW=15.8MiB/s (16.6MB/s)(15.8MiB/1003msec) 00:19:08.059 slat (nsec): min=867, max=8735.4k, avg=116616.15, stdev=666841.06 00:19:08.059 clat (usec): min=1401, max=42148, avg=14801.42, stdev=5459.29 00:19:08.059 lat (usec): min=3512, max=42153, avg=14918.03, 
stdev=5506.00 00:19:08.059 clat percentiles (usec): 00:19:08.059 | 1.00th=[ 7570], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[10159], 00:19:08.059 | 30.00th=[11076], 40.00th=[11731], 50.00th=[13304], 60.00th=[14877], 00:19:08.059 | 70.00th=[16909], 80.00th=[19792], 90.00th=[21103], 95.00th=[23987], 00:19:08.059 | 99.00th=[32900], 99.50th=[36963], 99.90th=[42206], 99.95th=[42206], 00:19:08.059 | 99.99th=[42206] 00:19:08.059 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:19:08.059 slat (nsec): min=1542, max=8080.9k, avg=122861.79, stdev=574595.12 00:19:08.059 clat (usec): min=587, max=56343, avg=16424.32, stdev=11905.94 00:19:08.059 lat (usec): min=620, max=56354, avg=16547.19, stdev=11994.25 00:19:08.059 clat percentiles (usec): 00:19:08.059 | 1.00th=[ 1139], 5.00th=[ 7439], 10.00th=[ 7570], 20.00th=[ 8455], 00:19:08.059 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[11863], 60.00th=[13829], 00:19:08.059 | 70.00th=[16909], 80.00th=[20055], 90.00th=[38536], 95.00th=[46400], 00:19:08.059 | 99.00th=[53216], 99.50th=[54789], 99.90th=[56361], 99.95th=[56361], 00:19:08.059 | 99.99th=[56361] 00:19:08.059 bw ( KiB/s): min=15224, max=17544, per=18.09%, avg=16384.00, stdev=1640.49, samples=2 00:19:08.059 iops : min= 3806, max= 4386, avg=4096.00, stdev=410.12, samples=2 00:19:08.059 lat (usec) : 750=0.09%, 1000=0.32% 00:19:08.059 lat (msec) : 2=0.56%, 4=0.33%, 10=26.94%, 20=53.75%, 50=16.63% 00:19:08.059 lat (msec) : 100=1.39% 00:19:08.059 cpu : usr=2.30%, sys=4.69%, ctx=450, majf=0, minf=1 00:19:08.059 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:08.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:08.059 issued rwts: total=4053,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.059 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:08.059 job2: (groupid=0, jobs=1): err= 0: pid=1953182: Mon Jun 10 11:57:01 2024 00:19:08.059 read: IOPS=4150, BW=16.2MiB/s (17.0MB/s)(16.3MiB/1003msec) 00:19:08.059 slat (nsec): min=874, max=12375k, avg=123100.16, stdev=738504.69 00:19:08.059 clat (usec): min=2119, max=51642, avg=15476.85, stdev=9024.40 00:19:08.059 lat (usec): min=2124, max=51665, avg=15599.95, stdev=9103.05 00:19:08.059 clat percentiles (usec): 00:19:08.059 | 1.00th=[ 5538], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 9241], 00:19:08.059 | 30.00th=[ 9896], 40.00th=[10683], 50.00th=[11994], 60.00th=[13435], 00:19:08.059 | 70.00th=[15533], 80.00th=[19006], 90.00th=[31851], 95.00th=[36439], 00:19:08.059 | 99.00th=[45351], 99.50th=[45876], 99.90th=[47973], 99.95th=[48497], 00:19:08.059 | 99.99th=[51643] 00:19:08.059 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:19:08.059 slat (nsec): min=1552, max=9979.6k, avg=101083.19, stdev=570286.49 00:19:08.059 clat (usec): min=4586, max=34078, avg=13533.63, stdev=5694.38 00:19:08.059 lat (usec): min=4595, max=34107, avg=13634.71, stdev=5742.09 00:19:08.059 clat percentiles (usec): 00:19:08.059 | 1.00th=[ 5669], 5.00th=[ 7504], 10.00th=[ 8160], 20.00th=[ 9372], 00:19:08.059 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10552], 60.00th=[13304], 00:19:08.059 | 70.00th=[15008], 80.00th=[18744], 90.00th=[23987], 95.00th=[25035], 00:19:08.059 | 99.00th=[26870], 99.50th=[27919], 99.90th=[32637], 99.95th=[33162], 00:19:08.059 | 99.99th=[33817] 00:19:08.059 bw ( KiB/s): min=16384, max=20000, per=20.09%, avg=18192.00, stdev=2556.90, samples=2 00:19:08.059 
iops : min= 4096, max= 5000, avg=4548.00, stdev=639.22, samples=2 00:19:08.059 lat (msec) : 4=0.05%, 10=31.31%, 20=52.02%, 50=16.60%, 100=0.02% 00:19:08.059 cpu : usr=2.89%, sys=4.19%, ctx=454, majf=0, minf=1 00:19:08.059 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:08.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:08.059 issued rwts: total=4163,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.059 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:08.059 job3: (groupid=0, jobs=1): err= 0: pid=1953183: Mon Jun 10 11:57:01 2024 00:19:08.059 read: IOPS=6596, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1009msec) 00:19:08.059 slat (nsec): min=935, max=15650k, avg=68610.96, stdev=570100.61 00:19:08.059 clat (usec): min=2714, max=52441, avg=9840.86, stdev=3917.49 00:19:08.059 lat (usec): min=2746, max=59722, avg=9909.47, stdev=3959.75 00:19:08.059 clat percentiles (usec): 00:19:08.059 | 1.00th=[ 2933], 5.00th=[ 5014], 10.00th=[ 5800], 20.00th=[ 7439], 00:19:08.059 | 30.00th=[ 7963], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9896], 00:19:08.059 | 70.00th=[10945], 80.00th=[11994], 90.00th=[14484], 95.00th=[16909], 00:19:08.059 | 99.00th=[20579], 99.50th=[23200], 99.90th=[50070], 99.95th=[50070], 00:19:08.059 | 99.99th=[52691] 00:19:08.059 write: IOPS=6848, BW=26.8MiB/s (28.1MB/s)(27.0MiB/1009msec); 0 zone resets 00:19:08.059 slat (nsec): min=1661, max=16400k, avg=64900.02, stdev=489992.77 00:19:08.059 clat (usec): min=1963, max=26990, avg=8725.38, stdev=3115.80 00:19:08.060 lat (usec): min=2493, max=27001, avg=8790.28, stdev=3139.35 00:19:08.060 clat percentiles (usec): 00:19:08.060 | 1.00th=[ 2999], 5.00th=[ 4359], 10.00th=[ 5211], 20.00th=[ 6456], 00:19:08.060 | 30.00th=[ 7177], 40.00th=[ 7504], 50.00th=[ 8225], 60.00th=[ 8979], 00:19:08.060 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[12911], 95.00th=[14353], 00:19:08.060 | 99.00th=[19006], 99.50th=[19006], 99.90th=[23462], 99.95th=[23462], 00:19:08.060 | 99.99th=[26870] 00:19:08.060 bw ( KiB/s): min=25592, max=28672, per=29.96%, avg=27132.00, stdev=2177.89, samples=2 00:19:08.060 iops : min= 6398, max= 7168, avg=6783.00, stdev=544.47, samples=2 00:19:08.060 lat (msec) : 2=0.01%, 4=3.07%, 10=65.16%, 20=30.89%, 50=0.76% 00:19:08.060 lat (msec) : 100=0.11% 00:19:08.060 cpu : usr=5.36%, sys=7.14%, ctx=503, majf=0, minf=1 00:19:08.060 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:19:08.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:08.060 issued rwts: total=6656,6910,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.060 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:08.060 00:19:08.060 Run status group 0 (all jobs): 00:19:08.060 READ: bw=85.3MiB/s (89.5MB/s), 15.8MiB/s-27.9MiB/s (16.6MB/s-29.3MB/s), io=86.1MiB (90.3MB), run=1003-1009msec 00:19:08.060 WRITE: bw=88.4MiB/s (92.7MB/s), 16.0MiB/s-28.2MiB/s (16.7MB/s-29.5MB/s), io=89.2MiB (93.6MB), run=1003-1009msec 00:19:08.060 00:19:08.060 Disk stats (read/write): 00:19:08.060 nvme0n1: ios=5677/5920, merge=0/0, ticks=38126/46563, in_queue=84689, util=98.90% 00:19:08.060 nvme0n2: ios=3108/3584, merge=0/0, ticks=21346/30618, in_queue=51964, util=87.97% 00:19:08.060 nvme0n3: ios=3511/3584, merge=0/0, ticks=23029/20472, in_queue=43501, util=91.68% 00:19:08.060 nvme0n4: ios=5667/5734, 
merge=0/0, ticks=50920/41173, in_queue=92093, util=97.55% 00:19:08.060 11:57:01 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:08.060 [global] 00:19:08.060 thread=1 00:19:08.060 invalidate=1 00:19:08.060 rw=randwrite 00:19:08.060 time_based=1 00:19:08.060 runtime=1 00:19:08.060 ioengine=libaio 00:19:08.060 direct=1 00:19:08.060 bs=4096 00:19:08.060 iodepth=128 00:19:08.060 norandommap=0 00:19:08.060 numjobs=1 00:19:08.060 00:19:08.060 verify_dump=1 00:19:08.060 verify_backlog=512 00:19:08.060 verify_state_save=0 00:19:08.060 do_verify=1 00:19:08.060 verify=crc32c-intel 00:19:08.060 [job0] 00:19:08.060 filename=/dev/nvme0n1 00:19:08.060 [job1] 00:19:08.060 filename=/dev/nvme0n2 00:19:08.060 [job2] 00:19:08.060 filename=/dev/nvme0n3 00:19:08.060 [job3] 00:19:08.060 filename=/dev/nvme0n4 00:19:08.060 Could not set queue depth (nvme0n1) 00:19:08.060 Could not set queue depth (nvme0n2) 00:19:08.060 Could not set queue depth (nvme0n3) 00:19:08.060 Could not set queue depth (nvme0n4) 00:19:08.320 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:08.320 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:08.320 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:08.320 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:08.320 fio-3.35 00:19:08.320 Starting 4 threads 00:19:09.705 00:19:09.705 job0: (groupid=0, jobs=1): err= 0: pid=1953784: Mon Jun 10 11:57:03 2024 00:19:09.705 read: IOPS=7164, BW=28.0MiB/s (29.3MB/s)(28.2MiB/1008msec) 00:19:09.705 slat (nsec): min=905, max=6609.7k, avg=61025.02, stdev=414077.86 00:19:09.705 clat (usec): min=2939, max=24309, avg=8089.49, stdev=2781.39 00:19:09.705 lat (usec): min=2941, max=24311, avg=8150.51, stdev=2804.00 00:19:09.705 clat percentiles (usec): 00:19:09.705 | 1.00th=[ 4146], 5.00th=[ 5276], 10.00th=[ 5538], 20.00th=[ 6063], 00:19:09.705 | 30.00th=[ 6390], 40.00th=[ 6783], 50.00th=[ 7439], 60.00th=[ 8029], 00:19:09.705 | 70.00th=[ 8848], 80.00th=[ 9634], 90.00th=[11338], 95.00th=[12780], 00:19:09.705 | 99.00th=[19792], 99.50th=[21365], 99.90th=[23200], 99.95th=[24249], 00:19:09.705 | 99.99th=[24249] 00:19:09.705 write: IOPS=7619, BW=29.8MiB/s (31.2MB/s)(30.0MiB/1008msec); 0 zone resets 00:19:09.705 slat (nsec): min=1526, max=15487k, avg=67512.96, stdev=440271.63 00:19:09.705 clat (usec): min=1094, max=28097, avg=9040.16, stdev=5370.51 00:19:09.705 lat (usec): min=1102, max=28106, avg=9107.67, stdev=5401.28 00:19:09.705 clat percentiles (usec): 00:19:09.705 | 1.00th=[ 2737], 5.00th=[ 3392], 10.00th=[ 3785], 20.00th=[ 4359], 00:19:09.705 | 30.00th=[ 5342], 40.00th=[ 6259], 50.00th=[ 7177], 60.00th=[ 8455], 00:19:09.705 | 70.00th=[10159], 80.00th=[15270], 90.00th=[17171], 95.00th=[18744], 00:19:09.705 | 99.00th=[24511], 99.50th=[25297], 99.90th=[27919], 99.95th=[28181], 00:19:09.705 | 99.99th=[28181] 00:19:09.705 bw ( KiB/s): min=29600, max=31256, per=34.90%, avg=30428.00, stdev=1170.97, samples=2 00:19:09.705 iops : min= 7400, max= 7814, avg=7607.00, stdev=292.74, samples=2 00:19:09.705 lat (msec) : 2=0.09%, 4=7.57%, 10=68.19%, 20=21.50%, 50=2.65% 00:19:09.706 cpu : usr=6.45%, sys=5.46%, ctx=492, majf=0, minf=1 00:19:09.706 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 
00:19:09.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.706 issued rwts: total=7222,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.706 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.706 job1: (groupid=0, jobs=1): err= 0: pid=1953785: Mon Jun 10 11:57:03 2024 00:19:09.706 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:19:09.706 slat (nsec): min=861, max=45144k, avg=163216.37, stdev=1172859.49 00:19:09.706 clat (usec): min=5011, max=55477, avg=19964.13, stdev=13419.20 00:19:09.706 lat (usec): min=5015, max=55504, avg=20127.35, stdev=13511.20 00:19:09.706 clat percentiles (usec): 00:19:09.706 | 1.00th=[ 5669], 5.00th=[ 6325], 10.00th=[ 7046], 20.00th=[ 7767], 00:19:09.706 | 30.00th=[ 9372], 40.00th=[13566], 50.00th=[14746], 60.00th=[17695], 00:19:09.706 | 70.00th=[25822], 80.00th=[33817], 90.00th=[42206], 95.00th=[43779], 00:19:09.706 | 99.00th=[52691], 99.50th=[53216], 99.90th=[53216], 99.95th=[54264], 00:19:09.706 | 99.99th=[55313] 00:19:09.706 write: IOPS=3850, BW=15.0MiB/s (15.8MB/s)(15.1MiB/1004msec); 0 zone resets 00:19:09.706 slat (nsec): min=1442, max=7959.3k, avg=103006.22, stdev=482301.95 00:19:09.706 clat (usec): min=3309, max=56737, avg=14401.72, stdev=11380.40 00:19:09.706 lat (usec): min=3844, max=56745, avg=14504.72, stdev=11444.50 00:19:09.706 clat percentiles (usec): 00:19:09.706 | 1.00th=[ 4047], 5.00th=[ 5014], 10.00th=[ 5866], 20.00th=[ 7701], 00:19:09.706 | 30.00th=[ 8291], 40.00th=[ 8848], 50.00th=[10028], 60.00th=[12125], 00:19:09.706 | 70.00th=[15270], 80.00th=[16909], 90.00th=[28181], 95.00th=[47973], 00:19:09.706 | 99.00th=[53740], 99.50th=[55313], 99.90th=[56886], 99.95th=[56886], 00:19:09.706 | 99.99th=[56886] 00:19:09.706 bw ( KiB/s): min= 9664, max=20248, per=17.15%, avg=14956.00, stdev=7484.02, samples=2 00:19:09.706 iops : min= 2416, max= 5062, avg=3739.00, stdev=1871.00, samples=2 00:19:09.706 lat (msec) : 4=0.31%, 10=42.54%, 20=31.52%, 50=21.92%, 100=3.72% 00:19:09.706 cpu : usr=2.29%, sys=3.39%, ctx=454, majf=0, minf=1 00:19:09.706 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:09.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.706 issued rwts: total=3584,3866,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.706 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.706 job2: (groupid=0, jobs=1): err= 0: pid=1953794: Mon Jun 10 11:57:03 2024 00:19:09.706 read: IOPS=3091, BW=12.1MiB/s (12.7MB/s)(12.1MiB/1004msec) 00:19:09.706 slat (nsec): min=886, max=18137k, avg=175632.31, stdev=1143668.12 00:19:09.706 clat (usec): min=2633, max=59135, avg=22265.66, stdev=13565.57 00:19:09.706 lat (usec): min=5905, max=59144, avg=22441.29, stdev=13646.12 00:19:09.706 clat percentiles (usec): 00:19:09.706 | 1.00th=[ 6915], 5.00th=[ 8848], 10.00th=[ 9896], 20.00th=[10552], 00:19:09.706 | 30.00th=[13829], 40.00th=[16581], 50.00th=[18744], 60.00th=[20841], 00:19:09.706 | 70.00th=[23725], 80.00th=[30540], 90.00th=[50070], 95.00th=[54264], 00:19:09.706 | 99.00th=[58983], 99.50th=[58983], 99.90th=[58983], 99.95th=[58983], 00:19:09.706 | 99.99th=[58983] 00:19:09.706 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:19:09.706 slat (nsec): min=1501, max=18007k, avg=122354.79, stdev=651246.61 00:19:09.706 clat (usec): min=1198, 
max=52184, avg=16264.17, stdev=8488.03 00:19:09.706 lat (usec): min=1207, max=52191, avg=16386.53, stdev=8524.24 00:19:09.706 clat percentiles (usec): 00:19:09.706 | 1.00th=[ 3720], 5.00th=[ 7046], 10.00th=[ 7767], 20.00th=[ 8291], 00:19:09.706 | 30.00th=[11076], 40.00th=[13435], 50.00th=[15664], 60.00th=[16909], 00:19:09.706 | 70.00th=[17957], 80.00th=[20055], 90.00th=[27395], 95.00th=[34866], 00:19:09.706 | 99.00th=[41681], 99.50th=[43779], 99.90th=[46924], 99.95th=[46924], 00:19:09.706 | 99.99th=[52167] 00:19:09.706 bw ( KiB/s): min=11528, max=16384, per=16.01%, avg=13956.00, stdev=3433.71, samples=2 00:19:09.706 iops : min= 2882, max= 4096, avg=3489.00, stdev=858.43, samples=2 00:19:09.706 lat (msec) : 2=0.21%, 4=0.58%, 10=20.29%, 20=48.25%, 50=25.90% 00:19:09.706 lat (msec) : 100=4.77% 00:19:09.706 cpu : usr=1.99%, sys=3.59%, ctx=369, majf=0, minf=1 00:19:09.706 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:09.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.706 issued rwts: total=3104,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.706 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.706 job3: (groupid=0, jobs=1): err= 0: pid=1953795: Mon Jun 10 11:57:03 2024 00:19:09.706 read: IOPS=6596, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1009msec) 00:19:09.706 slat (nsec): min=903, max=7605.6k, avg=74015.50, stdev=505691.15 00:19:09.706 clat (usec): min=1999, max=20079, avg=9931.90, stdev=2479.63 00:19:09.706 lat (usec): min=2012, max=20089, avg=10005.92, stdev=2502.43 00:19:09.706 clat percentiles (usec): 00:19:09.706 | 1.00th=[ 3654], 5.00th=[ 5997], 10.00th=[ 7046], 20.00th=[ 8225], 00:19:09.706 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9765], 60.00th=[10159], 00:19:09.706 | 70.00th=[10945], 80.00th=[11600], 90.00th=[13173], 95.00th=[14353], 00:19:09.706 | 99.00th=[16188], 99.50th=[18220], 99.90th=[19530], 99.95th=[20055], 00:19:09.706 | 99.99th=[20055] 00:19:09.706 write: IOPS=6800, BW=26.6MiB/s (27.9MB/s)(26.8MiB/1009msec); 0 zone resets 00:19:09.706 slat (nsec): min=1508, max=7345.2k, avg=67481.22, stdev=426992.32 00:19:09.706 clat (usec): min=1143, max=17726, avg=9024.67, stdev=2362.80 00:19:09.706 lat (usec): min=1153, max=17758, avg=9092.15, stdev=2385.74 00:19:09.706 clat percentiles (usec): 00:19:09.706 | 1.00th=[ 2704], 5.00th=[ 4621], 10.00th=[ 5866], 20.00th=[ 7111], 00:19:09.706 | 30.00th=[ 8094], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9372], 00:19:09.706 | 70.00th=[10290], 80.00th=[10945], 90.00th=[11994], 95.00th=[12387], 00:19:09.706 | 99.00th=[13829], 99.50th=[14746], 99.90th=[16188], 99.95th=[16450], 00:19:09.706 | 99.99th=[17695] 00:19:09.706 bw ( KiB/s): min=25240, max=28640, per=30.90%, avg=26940.00, stdev=2404.16, samples=2 00:19:09.706 iops : min= 6310, max= 7160, avg=6735.00, stdev=601.04, samples=2 00:19:09.706 lat (msec) : 2=0.27%, 4=1.93%, 10=58.56%, 20=39.21%, 50=0.03% 00:19:09.706 cpu : usr=5.06%, sys=5.95%, ctx=567, majf=0, minf=1 00:19:09.706 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:19:09.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.706 issued rwts: total=6656,6862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.706 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.706 00:19:09.706 Run status group 0 (all jobs): 
00:19:09.706 READ: bw=79.6MiB/s (83.5MB/s), 12.1MiB/s-28.0MiB/s (12.7MB/s-29.3MB/s), io=80.3MiB (84.2MB), run=1004-1009msec 00:19:09.706 WRITE: bw=85.1MiB/s (89.3MB/s), 13.9MiB/s-29.8MiB/s (14.6MB/s-31.2MB/s), io=85.9MiB (90.1MB), run=1004-1009msec 00:19:09.706 00:19:09.706 Disk stats (read/write): 00:19:09.706 nvme0n1: ios=6174/6144, merge=0/0, ticks=48418/53597, in_queue=102015, util=93.69% 00:19:09.706 nvme0n2: ios=3113/3535, merge=0/0, ticks=21034/14916, in_queue=35950, util=95.31% 00:19:09.706 nvme0n3: ios=2911/3072, merge=0/0, ticks=20837/17781, in_queue=38618, util=85.97% 00:19:09.706 nvme0n4: ios=5334/5632, merge=0/0, ticks=33510/31561, in_queue=65071, util=89.43% 00:19:09.706 11:57:03 -- target/fio.sh@55 -- # sync 00:19:09.706 11:57:03 -- target/fio.sh@59 -- # fio_pid=1954129 00:19:09.706 11:57:03 -- target/fio.sh@61 -- # sleep 3 00:19:09.706 11:57:03 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:09.706 [global] 00:19:09.706 thread=1 00:19:09.706 invalidate=1 00:19:09.706 rw=read 00:19:09.706 time_based=1 00:19:09.706 runtime=10 00:19:09.706 ioengine=libaio 00:19:09.706 direct=1 00:19:09.706 bs=4096 00:19:09.706 iodepth=1 00:19:09.706 norandommap=1 00:19:09.706 numjobs=1 00:19:09.706 00:19:09.706 [job0] 00:19:09.706 filename=/dev/nvme0n1 00:19:09.706 [job1] 00:19:09.706 filename=/dev/nvme0n2 00:19:09.706 [job2] 00:19:09.706 filename=/dev/nvme0n3 00:19:09.706 [job3] 00:19:09.706 filename=/dev/nvme0n4 00:19:09.706 Could not set queue depth (nvme0n1) 00:19:09.706 Could not set queue depth (nvme0n2) 00:19:09.706 Could not set queue depth (nvme0n3) 00:19:09.706 Could not set queue depth (nvme0n4) 00:19:09.966 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:09.966 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:09.966 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:09.966 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:09.966 fio-3.35 00:19:09.966 Starting 4 threads 00:19:13.267 11:57:06 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:13.267 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=4702208, buflen=4096 00:19:13.267 fio: pid=1954327, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:13.267 11:57:06 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:13.267 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=11493376, buflen=4096 00:19:13.267 fio: pid=1954326, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:13.267 11:57:06 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:13.267 11:57:06 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:13.267 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=3170304, buflen=4096 00:19:13.267 fio: pid=1954324, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:13.267 11:57:06 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:13.267 11:57:06 -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:13.267 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=13209600, buflen=4096 00:19:13.267 fio: pid=1954325, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:13.267 11:57:06 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:13.267 11:57:06 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:13.267 00:19:13.267 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1954324: Mon Jun 10 11:57:07 2024 00:19:13.267 read: IOPS=267, BW=1071KiB/s (1096kB/s)(3096KiB/2892msec) 00:19:13.267 slat (usec): min=2, max=250, avg= 9.51, stdev=10.25 00:19:13.267 clat (usec): min=210, max=42991, avg=3724.18, stdev=11040.29 00:19:13.267 lat (usec): min=213, max=43016, avg=3733.67, stdev=11045.83 00:19:13.267 clat percentiles (usec): 00:19:13.267 | 1.00th=[ 310], 5.00th=[ 416], 10.00th=[ 445], 20.00th=[ 486], 00:19:13.267 | 30.00th=[ 510], 40.00th=[ 529], 50.00th=[ 537], 60.00th=[ 545], 00:19:13.267 | 70.00th=[ 562], 80.00th=[ 586], 90.00th=[ 627], 95.00th=[41681], 00:19:13.267 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:19:13.267 | 99.99th=[42730] 00:19:13.267 bw ( KiB/s): min= 96, max= 5712, per=11.77%, avg=1220.80, stdev=2510.66, samples=5 00:19:13.267 iops : min= 24, max= 1428, avg=305.20, stdev=627.66, samples=5 00:19:13.267 lat (usec) : 250=0.26%, 500=23.23%, 750=68.39%, 1000=0.13% 00:19:13.267 lat (msec) : 2=0.13%, 50=7.74% 00:19:13.267 cpu : usr=0.10%, sys=0.24%, ctx=776, majf=0, minf=1 00:19:13.267 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:13.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.267 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.267 issued rwts: total=775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.267 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:13.267 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1954325: Mon Jun 10 11:57:07 2024 00:19:13.267 read: IOPS=1050, BW=4202KiB/s (4303kB/s)(12.6MiB/3070msec) 00:19:13.267 slat (usec): min=6, max=21640, avg=54.04, stdev=706.23 00:19:13.267 clat (usec): min=225, max=1405, avg=891.24, stdev=232.99 00:19:13.267 lat (usec): min=232, max=22285, avg=945.29, stdev=740.09 00:19:13.267 clat percentiles (usec): 00:19:13.267 | 1.00th=[ 408], 5.00th=[ 529], 10.00th=[ 611], 20.00th=[ 660], 00:19:13.267 | 30.00th=[ 734], 40.00th=[ 832], 50.00th=[ 898], 60.00th=[ 947], 00:19:13.267 | 70.00th=[ 988], 80.00th=[ 1139], 90.00th=[ 1237], 95.00th=[ 1287], 00:19:13.267 | 99.00th=[ 1336], 99.50th=[ 1336], 99.90th=[ 1369], 99.95th=[ 1401], 00:19:13.267 | 99.99th=[ 1401] 00:19:13.267 bw ( KiB/s): min= 3272, max= 4832, per=40.30%, avg=4176.00, stdev=695.61, samples=5 00:19:13.267 iops : min= 818, max= 1208, avg=1044.00, stdev=173.90, samples=5 00:19:13.267 lat (usec) : 250=0.09%, 500=3.04%, 750=28.08%, 1000=41.57% 00:19:13.267 lat (msec) : 2=27.19% 00:19:13.267 cpu : usr=1.08%, sys=3.06%, ctx=3233, majf=0, minf=1 00:19:13.267 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:13.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.267 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:19:13.267 issued rwts: total=3226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.267 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:13.267 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1954326: Mon Jun 10 11:57:07 2024 00:19:13.267 read: IOPS=1031, BW=4125KiB/s (4224kB/s)(11.0MiB/2721msec) 00:19:13.267 slat (usec): min=6, max=23343, avg=38.97, stdev=524.73 00:19:13.267 clat (usec): min=401, max=4340, avg=923.97, stdev=183.14 00:19:13.267 lat (usec): min=427, max=24178, avg=962.95, stdev=553.61 00:19:13.267 clat percentiles (usec): 00:19:13.267 | 1.00th=[ 529], 5.00th=[ 635], 10.00th=[ 676], 20.00th=[ 783], 00:19:13.267 | 30.00th=[ 832], 40.00th=[ 873], 50.00th=[ 938], 60.00th=[ 988], 00:19:13.267 | 70.00th=[ 1029], 80.00th=[ 1074], 90.00th=[ 1139], 95.00th=[ 1188], 00:19:13.267 | 99.00th=[ 1254], 99.50th=[ 1287], 99.90th=[ 1352], 99.95th=[ 1352], 00:19:13.267 | 99.99th=[ 4359] 00:19:13.267 bw ( KiB/s): min= 3968, max= 4568, per=40.62%, avg=4209.60, stdev=302.76, samples=5 00:19:13.267 iops : min= 992, max= 1142, avg=1052.40, stdev=75.69, samples=5 00:19:13.267 lat (usec) : 500=0.46%, 750=15.75%, 1000=47.03% 00:19:13.267 lat (msec) : 2=36.69%, 10=0.04% 00:19:13.267 cpu : usr=0.88%, sys=3.31%, ctx=2811, majf=0, minf=1 00:19:13.267 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:13.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.267 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.267 issued rwts: total=2807,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.267 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:13.267 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1954327: Mon Jun 10 11:57:07 2024 00:19:13.267 read: IOPS=447, BW=1787KiB/s (1830kB/s)(4592KiB/2569msec) 00:19:13.267 slat (nsec): min=3529, max=42663, avg=11805.70, stdev=9593.30 00:19:13.267 clat (usec): min=367, max=43060, avg=2222.20, stdev=7477.61 00:19:13.267 lat (usec): min=371, max=43091, avg=2233.99, stdev=7479.49 00:19:13.267 clat percentiles (usec): 00:19:13.267 | 1.00th=[ 474], 5.00th=[ 594], 10.00th=[ 652], 20.00th=[ 701], 00:19:13.267 | 30.00th=[ 725], 40.00th=[ 742], 50.00th=[ 758], 60.00th=[ 799], 00:19:13.267 | 70.00th=[ 865], 80.00th=[ 1004], 90.00th=[ 1205], 95.00th=[ 1254], 00:19:13.267 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:19:13.267 | 99.99th=[43254] 00:19:13.267 bw ( KiB/s): min= 96, max= 4672, per=17.69%, avg=1833.60, stdev=2384.96, samples=5 00:19:13.267 iops : min= 24, max= 1168, avg=458.40, stdev=596.24, samples=5 00:19:13.267 lat (usec) : 500=2.26%, 750=43.34%, 1000=34.12% 00:19:13.267 lat (msec) : 2=16.80%, 50=3.39% 00:19:13.267 cpu : usr=0.08%, sys=0.74%, ctx=1149, majf=0, minf=2 00:19:13.268 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:13.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.268 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.268 issued rwts: total=1149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.268 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:13.268 00:19:13.268 Run status group 0 (all jobs): 00:19:13.268 READ: bw=10.1MiB/s (10.6MB/s), 1071KiB/s-4202KiB/s (1096kB/s-4303kB/s), io=31.1MiB (32.6MB), run=2569-3070msec 00:19:13.268 00:19:13.268 Disk stats (read/write): 00:19:13.268 nvme0n1: 
ios=772/0, merge=0/0, ticks=2796/0, in_queue=2796, util=94.79% 00:19:13.268 nvme0n2: ios=3032/0, merge=0/0, ticks=2634/0, in_queue=2634, util=94.13% 00:19:13.268 nvme0n3: ios=2751/0, merge=0/0, ticks=2994/0, in_queue=2994, util=100.00% 00:19:13.268 nvme0n4: ios=1142/0, merge=0/0, ticks=2274/0, in_queue=2274, util=96.06% 00:19:13.528 11:57:07 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:13.528 11:57:07 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:13.790 11:57:07 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:13.790 11:57:07 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:13.790 11:57:07 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:13.790 11:57:07 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:14.050 11:57:07 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:14.050 11:57:07 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:14.050 11:57:07 -- target/fio.sh@69 -- # fio_status=0 00:19:14.050 11:57:07 -- target/fio.sh@70 -- # wait 1954129 00:19:14.050 11:57:07 -- target/fio.sh@70 -- # fio_status=4 00:19:14.050 11:57:07 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:14.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:14.311 11:57:07 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:14.311 11:57:07 -- common/autotest_common.sh@1198 -- # local i=0 00:19:14.311 11:57:07 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:14.311 11:57:07 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:14.311 11:57:07 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:14.311 11:57:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:14.311 11:57:07 -- common/autotest_common.sh@1210 -- # return 0 00:19:14.311 11:57:07 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:14.311 11:57:07 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:14.311 nvmf hotplug test: fio failed as expected 00:19:14.311 11:57:07 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:14.311 11:57:08 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:14.572 11:57:08 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:14.572 11:57:08 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:14.572 11:57:08 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:14.572 11:57:08 -- target/fio.sh@91 -- # nvmftestfini 00:19:14.572 11:57:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:14.572 11:57:08 -- nvmf/common.sh@116 -- # sync 00:19:14.572 11:57:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:14.572 11:57:08 -- nvmf/common.sh@119 -- # set +e 00:19:14.572 11:57:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:14.572 11:57:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:14.572 rmmod nvme_tcp 00:19:14.572 rmmod nvme_fabrics 00:19:14.572 rmmod nvme_keyring 00:19:14.572 11:57:08 -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:14.572 11:57:08 -- nvmf/common.sh@123 -- # set -e 00:19:14.572 11:57:08 -- nvmf/common.sh@124 -- # return 0 00:19:14.572 11:57:08 -- nvmf/common.sh@477 -- # '[' -n 1950487 ']' 00:19:14.572 11:57:08 -- nvmf/common.sh@478 -- # killprocess 1950487 00:19:14.572 11:57:08 -- common/autotest_common.sh@926 -- # '[' -z 1950487 ']' 00:19:14.572 11:57:08 -- common/autotest_common.sh@930 -- # kill -0 1950487 00:19:14.572 11:57:08 -- common/autotest_common.sh@931 -- # uname 00:19:14.572 11:57:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:14.572 11:57:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1950487 00:19:14.572 11:57:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:14.572 11:57:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:14.572 11:57:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1950487' 00:19:14.572 killing process with pid 1950487 00:19:14.572 11:57:08 -- common/autotest_common.sh@945 -- # kill 1950487 00:19:14.572 11:57:08 -- common/autotest_common.sh@950 -- # wait 1950487 00:19:14.833 11:57:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:14.833 11:57:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:14.833 11:57:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:14.833 11:57:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:14.833 11:57:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:14.833 11:57:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.833 11:57:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:14.833 11:57:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.748 11:57:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:16.748 00:19:16.748 real 0m28.416s 00:19:16.748 user 2m31.324s 00:19:16.748 sys 0m9.223s 00:19:16.748 11:57:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:16.748 11:57:10 -- common/autotest_common.sh@10 -- # set +x 00:19:16.748 ************************************ 00:19:16.748 END TEST nvmf_fio_target 00:19:16.748 ************************************ 00:19:16.748 11:57:10 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:16.748 11:57:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:16.748 11:57:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:16.748 11:57:10 -- common/autotest_common.sh@10 -- # set +x 00:19:16.748 ************************************ 00:19:16.748 START TEST nvmf_bdevio 00:19:16.748 ************************************ 00:19:16.748 11:57:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:17.009 * Looking for test storage... 
00:19:17.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:17.009 11:57:10 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:17.009 11:57:10 -- nvmf/common.sh@7 -- # uname -s 00:19:17.009 11:57:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:17.009 11:57:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:17.009 11:57:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:17.009 11:57:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.009 11:57:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:17.009 11:57:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:17.009 11:57:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.010 11:57:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:17.010 11:57:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.010 11:57:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:17.010 11:57:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:17.010 11:57:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:17.010 11:57:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.010 11:57:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:17.010 11:57:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:17.010 11:57:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:17.010 11:57:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.010 11:57:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.010 11:57:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.010 11:57:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.010 11:57:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.010 11:57:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.010 11:57:10 -- paths/export.sh@5 -- # export PATH 00:19:17.010 11:57:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.010 11:57:10 -- nvmf/common.sh@46 -- # : 0 00:19:17.010 11:57:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:17.010 11:57:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:17.010 11:57:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:17.010 11:57:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.010 11:57:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.010 11:57:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:17.010 11:57:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:17.010 11:57:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:17.010 11:57:10 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:17.010 11:57:10 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:17.010 11:57:10 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:17.010 11:57:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:17.010 11:57:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:17.010 11:57:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:17.010 11:57:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:17.010 11:57:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:17.010 11:57:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.010 11:57:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:17.010 11:57:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.010 11:57:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:17.010 11:57:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:17.010 11:57:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:17.010 11:57:10 -- common/autotest_common.sh@10 -- # set +x 00:19:25.151 11:57:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:25.151 11:57:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:25.151 11:57:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:25.151 11:57:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:25.151 11:57:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:25.151 11:57:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:25.151 11:57:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:25.151 11:57:17 -- nvmf/common.sh@294 -- # net_devs=() 00:19:25.151 11:57:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:25.151 11:57:17 -- nvmf/common.sh@295 
-- # e810=() 00:19:25.151 11:57:17 -- nvmf/common.sh@295 -- # local -ga e810 00:19:25.151 11:57:17 -- nvmf/common.sh@296 -- # x722=() 00:19:25.151 11:57:17 -- nvmf/common.sh@296 -- # local -ga x722 00:19:25.151 11:57:17 -- nvmf/common.sh@297 -- # mlx=() 00:19:25.151 11:57:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:25.151 11:57:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:25.151 11:57:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:25.151 11:57:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:25.151 11:57:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:25.151 11:57:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:25.151 11:57:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:25.151 11:57:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:25.151 11:57:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:25.152 11:57:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:25.152 11:57:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:25.152 11:57:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:25.152 11:57:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:25.152 11:57:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:25.152 11:57:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:25.152 11:57:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:25.152 11:57:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:25.152 11:57:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:25.152 11:57:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:25.152 11:57:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:25.152 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:25.152 11:57:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:25.152 11:57:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:25.152 11:57:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:25.152 11:57:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:25.152 11:57:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:25.152 11:57:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:25.152 11:57:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:25.152 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:25.152 11:57:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:25.152 11:57:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:25.152 11:57:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:25.152 11:57:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:25.152 11:57:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:25.152 11:57:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:25.152 11:57:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:25.152 11:57:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:25.152 11:57:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:25.152 11:57:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.152 11:57:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:25.152 11:57:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.152 11:57:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:25.152 Found 
net devices under 0000:31:00.0: cvl_0_0 00:19:25.152 11:57:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.152 11:57:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:25.152 11:57:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.152 11:57:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:25.152 11:57:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.152 11:57:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:25.152 Found net devices under 0000:31:00.1: cvl_0_1 00:19:25.152 11:57:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.152 11:57:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:25.152 11:57:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:25.152 11:57:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:25.152 11:57:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:25.152 11:57:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:25.152 11:57:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:25.152 11:57:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:25.152 11:57:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:25.152 11:57:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:25.152 11:57:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:25.152 11:57:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:25.152 11:57:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:25.152 11:57:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:25.152 11:57:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:25.152 11:57:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:25.152 11:57:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:25.152 11:57:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:25.152 11:57:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:25.152 11:57:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:25.152 11:57:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:25.152 11:57:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:25.152 11:57:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:25.152 11:57:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:25.152 11:57:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:25.152 11:57:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:25.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:25.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:19:25.152 00:19:25.152 --- 10.0.0.2 ping statistics --- 00:19:25.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.152 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:19:25.152 11:57:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:25.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:25.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.346 ms 00:19:25.152 00:19:25.152 --- 10.0.0.1 ping statistics --- 00:19:25.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.152 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:19:25.152 11:57:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:25.152 11:57:17 -- nvmf/common.sh@410 -- # return 0 00:19:25.152 11:57:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:25.152 11:57:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:25.152 11:57:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:25.152 11:57:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:25.152 11:57:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:25.152 11:57:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:25.152 11:57:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:25.152 11:57:17 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:25.152 11:57:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:25.152 11:57:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:25.152 11:57:17 -- common/autotest_common.sh@10 -- # set +x 00:19:25.152 11:57:17 -- nvmf/common.sh@469 -- # nvmfpid=1959882 00:19:25.152 11:57:17 -- nvmf/common.sh@470 -- # waitforlisten 1959882 00:19:25.152 11:57:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:25.152 11:57:17 -- common/autotest_common.sh@819 -- # '[' -z 1959882 ']' 00:19:25.152 11:57:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.152 11:57:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:25.152 11:57:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.152 11:57:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:25.152 11:57:17 -- common/autotest_common.sh@10 -- # set +x 00:19:25.152 [2024-06-10 11:57:17.999984] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:25.152 [2024-06-10 11:57:18.000052] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.152 EAL: No free 2048 kB hugepages reported on node 1 00:19:25.152 [2024-06-10 11:57:18.086712] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:25.152 [2024-06-10 11:57:18.175319] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:25.152 [2024-06-10 11:57:18.175470] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.152 [2024-06-10 11:57:18.175479] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:25.152 [2024-06-10 11:57:18.175486] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:25.152 [2024-06-10 11:57:18.175660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:25.152 [2024-06-10 11:57:18.175832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:25.152 [2024-06-10 11:57:18.175997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:25.152 [2024-06-10 11:57:18.175997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:25.152 11:57:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:25.152 11:57:18 -- common/autotest_common.sh@852 -- # return 0 00:19:25.152 11:57:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:25.152 11:57:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:25.152 11:57:18 -- common/autotest_common.sh@10 -- # set +x 00:19:25.152 11:57:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.152 11:57:18 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:25.152 11:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:25.152 11:57:18 -- common/autotest_common.sh@10 -- # set +x 00:19:25.152 [2024-06-10 11:57:18.842684] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.152 11:57:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:25.152 11:57:18 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:25.152 11:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:25.152 11:57:18 -- common/autotest_common.sh@10 -- # set +x 00:19:25.152 Malloc0 00:19:25.152 11:57:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:25.152 11:57:18 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:25.152 11:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:25.152 11:57:18 -- common/autotest_common.sh@10 -- # set +x 00:19:25.152 11:57:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:25.152 11:57:18 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:25.152 11:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:25.152 11:57:18 -- common/autotest_common.sh@10 -- # set +x 00:19:25.152 11:57:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:25.152 11:57:18 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:25.152 11:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:25.152 11:57:18 -- common/autotest_common.sh@10 -- # set +x 00:19:25.152 [2024-06-10 11:57:18.895759] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.152 11:57:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:25.152 11:57:18 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:25.153 11:57:18 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:25.153 11:57:18 -- nvmf/common.sh@520 -- # config=() 00:19:25.153 11:57:18 -- nvmf/common.sh@520 -- # local subsystem config 00:19:25.153 11:57:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:25.153 11:57:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:25.153 { 00:19:25.153 "params": { 00:19:25.153 "name": "Nvme$subsystem", 00:19:25.153 "trtype": "$TEST_TRANSPORT", 00:19:25.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:25.153 "adrfam": "ipv4", 00:19:25.153 "trsvcid": 
"$NVMF_PORT", 00:19:25.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:25.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:25.153 "hdgst": ${hdgst:-false}, 00:19:25.153 "ddgst": ${ddgst:-false} 00:19:25.153 }, 00:19:25.153 "method": "bdev_nvme_attach_controller" 00:19:25.153 } 00:19:25.153 EOF 00:19:25.153 )") 00:19:25.153 11:57:18 -- nvmf/common.sh@542 -- # cat 00:19:25.153 11:57:18 -- nvmf/common.sh@544 -- # jq . 00:19:25.153 11:57:18 -- nvmf/common.sh@545 -- # IFS=, 00:19:25.153 11:57:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:25.153 "params": { 00:19:25.153 "name": "Nvme1", 00:19:25.153 "trtype": "tcp", 00:19:25.153 "traddr": "10.0.0.2", 00:19:25.153 "adrfam": "ipv4", 00:19:25.153 "trsvcid": "4420", 00:19:25.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.153 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:25.153 "hdgst": false, 00:19:25.153 "ddgst": false 00:19:25.153 }, 00:19:25.153 "method": "bdev_nvme_attach_controller" 00:19:25.153 }' 00:19:25.413 [2024-06-10 11:57:18.946607] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:25.414 [2024-06-10 11:57:18.946681] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1960095 ] 00:19:25.414 EAL: No free 2048 kB hugepages reported on node 1 00:19:25.414 [2024-06-10 11:57:19.014357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:25.414 [2024-06-10 11:57:19.087792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.414 [2024-06-10 11:57:19.087915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.414 [2024-06-10 11:57:19.087918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.674 [2024-06-10 11:57:19.267823] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:25.674 [2024-06-10 11:57:19.267854] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:25.674 I/O targets: 00:19:25.674 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:25.674 00:19:25.674 00:19:25.674 CUnit - A unit testing framework for C - Version 2.1-3 00:19:25.674 http://cunit.sourceforge.net/ 00:19:25.674 00:19:25.674 00:19:25.674 Suite: bdevio tests on: Nvme1n1 00:19:25.674 Test: blockdev write read block ...passed 00:19:25.674 Test: blockdev write zeroes read block ...passed 00:19:25.674 Test: blockdev write zeroes read no split ...passed 00:19:25.674 Test: blockdev write zeroes read split ...passed 00:19:25.674 Test: blockdev write zeroes read split partial ...passed 00:19:25.674 Test: blockdev reset ...[2024-06-10 11:57:19.441631] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:25.674 [2024-06-10 11:57:19.441675] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1399080 (9): Bad file descriptor 00:19:25.935 [2024-06-10 11:57:19.457767] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:25.935 passed 00:19:25.935 Test: blockdev write read 8 blocks ...passed 00:19:25.935 Test: blockdev write read size > 128k ...passed 00:19:25.935 Test: blockdev write read invalid size ...passed 00:19:25.935 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:25.935 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:25.935 Test: blockdev write read max offset ...passed 00:19:25.935 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:25.935 Test: blockdev writev readv 8 blocks ...passed 00:19:26.196 Test: blockdev writev readv 30 x 1block ...passed 00:19:26.196 Test: blockdev writev readv block ...passed 00:19:26.196 Test: blockdev writev readv size > 128k ...passed 00:19:26.196 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:26.196 Test: blockdev comparev and writev ...[2024-06-10 11:57:19.763598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:26.196 [2024-06-10 11:57:19.763622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:26.196 [2024-06-10 11:57:19.763633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:26.196 [2024-06-10 11:57:19.763639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:26.196 [2024-06-10 11:57:19.764054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:26.196 [2024-06-10 11:57:19.764063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:26.196 [2024-06-10 11:57:19.764076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:26.196 [2024-06-10 11:57:19.764082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:26.196 [2024-06-10 11:57:19.764601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:26.196 [2024-06-10 11:57:19.764608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:26.196 [2024-06-10 11:57:19.764617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:26.196 [2024-06-10 11:57:19.764622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:26.196 [2024-06-10 11:57:19.765171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:26.196 [2024-06-10 11:57:19.765178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:26.196 [2024-06-10 11:57:19.765187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:26.196 [2024-06-10 11:57:19.765192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:26.196 passed 00:19:26.196 Test: blockdev nvme passthru rw ...passed 00:19:26.196 Test: blockdev nvme passthru vendor specific ...[2024-06-10 11:57:19.850017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:26.196 [2024-06-10 11:57:19.850027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:26.196 [2024-06-10 11:57:19.850428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:26.196 [2024-06-10 11:57:19.850435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:26.196 [2024-06-10 11:57:19.850847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:26.196 [2024-06-10 11:57:19.850853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:26.196 [2024-06-10 11:57:19.851226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:26.196 [2024-06-10 11:57:19.851233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:26.196 passed 00:19:26.196 Test: blockdev nvme admin passthru ...passed 00:19:26.196 Test: blockdev copy ...passed 00:19:26.196 00:19:26.196 Run Summary: Type Total Ran Passed Failed Inactive 00:19:26.196 suites 1 1 n/a 0 0 00:19:26.196 tests 23 23 23 0 0 00:19:26.196 asserts 152 152 152 0 n/a 00:19:26.196 00:19:26.196 Elapsed time = 1.279 seconds 00:19:26.457 11:57:20 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:26.457 11:57:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:26.457 11:57:20 -- common/autotest_common.sh@10 -- # set +x 00:19:26.457 11:57:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:26.457 11:57:20 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:26.457 11:57:20 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:26.457 11:57:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:26.457 11:57:20 -- nvmf/common.sh@116 -- # sync 00:19:26.458 11:57:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:26.458 11:57:20 -- nvmf/common.sh@119 -- # set +e 00:19:26.458 11:57:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:26.458 11:57:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:26.458 rmmod nvme_tcp 00:19:26.458 rmmod nvme_fabrics 00:19:26.458 rmmod nvme_keyring 00:19:26.458 11:57:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:26.458 11:57:20 -- nvmf/common.sh@123 -- # set -e 00:19:26.458 11:57:20 -- nvmf/common.sh@124 -- # return 0 00:19:26.458 11:57:20 -- nvmf/common.sh@477 -- # '[' -n 1959882 ']' 00:19:26.458 11:57:20 -- nvmf/common.sh@478 -- # killprocess 1959882 00:19:26.458 11:57:20 -- common/autotest_common.sh@926 -- # '[' -z 1959882 ']' 00:19:26.458 11:57:20 -- common/autotest_common.sh@930 -- # kill -0 1959882 00:19:26.458 11:57:20 -- common/autotest_common.sh@931 -- # uname 00:19:26.458 11:57:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:26.458 11:57:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1959882 00:19:26.458 11:57:20 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:19:26.458 11:57:20 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:19:26.458 11:57:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1959882' 00:19:26.458 killing process with pid 1959882 00:19:26.458 11:57:20 -- common/autotest_common.sh@945 -- # kill 1959882 00:19:26.458 11:57:20 -- common/autotest_common.sh@950 -- # wait 1959882 00:19:26.719 11:57:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:26.719 11:57:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:26.719 11:57:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:26.719 11:57:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:26.719 11:57:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:26.719 11:57:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.719 11:57:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:26.719 11:57:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.629 11:57:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:28.629 00:19:28.629 real 0m11.879s 00:19:28.629 user 0m12.679s 00:19:28.629 sys 0m5.931s 00:19:28.629 11:57:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:28.629 11:57:22 -- common/autotest_common.sh@10 -- # set +x 00:19:28.629 ************************************ 00:19:28.629 END TEST nvmf_bdevio 00:19:28.629 ************************************ 00:19:28.890 11:57:22 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:19:28.890 11:57:22 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:28.890 11:57:22 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:28.890 11:57:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:28.890 11:57:22 -- common/autotest_common.sh@10 -- # set +x 00:19:28.890 ************************************ 00:19:28.890 START TEST nvmf_bdevio_no_huge 00:19:28.890 ************************************ 00:19:28.890 11:57:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:28.890 * Looking for test storage... 
00:19:28.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:28.890 11:57:22 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:28.890 11:57:22 -- nvmf/common.sh@7 -- # uname -s 00:19:28.890 11:57:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:28.890 11:57:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:28.890 11:57:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:28.890 11:57:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:28.890 11:57:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:28.890 11:57:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:28.890 11:57:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:28.890 11:57:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:28.890 11:57:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:28.890 11:57:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:28.890 11:57:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:28.890 11:57:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:28.890 11:57:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:28.890 11:57:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:28.890 11:57:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:28.890 11:57:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:28.890 11:57:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:28.890 11:57:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:28.890 11:57:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:28.890 11:57:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.890 11:57:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.890 11:57:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.890 11:57:22 -- paths/export.sh@5 -- # export PATH 00:19:28.890 11:57:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.890 11:57:22 -- nvmf/common.sh@46 -- # : 0 00:19:28.890 11:57:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:28.890 11:57:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:28.890 11:57:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:28.890 11:57:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:28.890 11:57:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:28.890 11:57:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:28.890 11:57:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:28.890 11:57:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:28.890 11:57:22 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:28.890 11:57:22 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:28.890 11:57:22 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:28.890 11:57:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:28.890 11:57:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:28.890 11:57:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:28.890 11:57:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:28.890 11:57:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:28.890 11:57:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.890 11:57:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:28.890 11:57:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.890 11:57:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:28.890 11:57:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:28.890 11:57:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:28.890 11:57:22 -- common/autotest_common.sh@10 -- # set +x 00:19:37.039 11:57:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:37.039 11:57:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:37.039 11:57:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:37.039 11:57:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:37.039 11:57:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:37.039 11:57:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:37.039 11:57:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:37.039 11:57:29 -- nvmf/common.sh@294 -- # net_devs=() 00:19:37.039 11:57:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:37.039 11:57:29 -- nvmf/common.sh@295 
-- # e810=() 00:19:37.039 11:57:29 -- nvmf/common.sh@295 -- # local -ga e810 00:19:37.039 11:57:29 -- nvmf/common.sh@296 -- # x722=() 00:19:37.039 11:57:29 -- nvmf/common.sh@296 -- # local -ga x722 00:19:37.039 11:57:29 -- nvmf/common.sh@297 -- # mlx=() 00:19:37.039 11:57:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:37.039 11:57:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:37.039 11:57:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:37.039 11:57:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:37.039 11:57:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:37.039 11:57:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:37.039 11:57:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:37.039 11:57:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:37.039 11:57:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:37.039 11:57:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:37.039 11:57:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:37.039 11:57:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:37.039 11:57:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:37.039 11:57:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:37.039 11:57:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:37.040 11:57:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:37.040 11:57:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:37.040 11:57:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:37.040 11:57:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:37.040 11:57:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:37.040 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:37.040 11:57:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:37.040 11:57:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:37.040 11:57:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.040 11:57:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.040 11:57:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:37.040 11:57:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:37.040 11:57:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:37.040 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:37.040 11:57:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:37.040 11:57:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:37.040 11:57:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.040 11:57:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.040 11:57:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:37.040 11:57:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:37.040 11:57:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:37.040 11:57:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:37.040 11:57:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:37.040 11:57:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.040 11:57:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:37.040 11:57:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.040 11:57:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:37.040 Found 
net devices under 0000:31:00.0: cvl_0_0 00:19:37.040 11:57:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.040 11:57:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:37.040 11:57:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.040 11:57:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:37.040 11:57:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.040 11:57:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:37.040 Found net devices under 0000:31:00.1: cvl_0_1 00:19:37.040 11:57:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.040 11:57:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:37.040 11:57:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:37.040 11:57:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:37.040 11:57:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:37.040 11:57:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:37.040 11:57:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:37.040 11:57:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:37.040 11:57:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:37.040 11:57:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:37.040 11:57:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:37.040 11:57:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:37.040 11:57:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:37.040 11:57:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:37.040 11:57:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:37.040 11:57:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:37.040 11:57:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:37.040 11:57:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:37.040 11:57:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:37.040 11:57:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:37.040 11:57:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:37.040 11:57:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:37.040 11:57:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:37.040 11:57:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:37.040 11:57:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:37.040 11:57:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:37.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:37.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:19:37.040 00:19:37.040 --- 10.0.0.2 ping statistics --- 00:19:37.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.040 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:19:37.040 11:57:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:37.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:37.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:19:37.040 00:19:37.040 --- 10.0.0.1 ping statistics --- 00:19:37.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.040 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:19:37.040 11:57:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:37.040 11:57:29 -- nvmf/common.sh@410 -- # return 0 00:19:37.040 11:57:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:37.040 11:57:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:37.040 11:57:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:37.040 11:57:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:37.040 11:57:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:37.040 11:57:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:37.040 11:57:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:37.040 11:57:29 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:37.040 11:57:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:37.040 11:57:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:37.040 11:57:29 -- common/autotest_common.sh@10 -- # set +x 00:19:37.040 11:57:29 -- nvmf/common.sh@469 -- # nvmfpid=1964653 00:19:37.040 11:57:29 -- nvmf/common.sh@470 -- # waitforlisten 1964653 00:19:37.040 11:57:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:37.040 11:57:29 -- common/autotest_common.sh@819 -- # '[' -z 1964653 ']' 00:19:37.040 11:57:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.040 11:57:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:37.040 11:57:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.040 11:57:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:37.040 11:57:29 -- common/autotest_common.sh@10 -- # set +x 00:19:37.040 [2024-06-10 11:57:29.954697] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:37.040 [2024-06-10 11:57:29.954750] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:37.040 [2024-06-10 11:57:30.044378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:37.040 [2024-06-10 11:57:30.135409] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:37.040 [2024-06-10 11:57:30.135541] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:37.040 [2024-06-10 11:57:30.135550] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:37.040 [2024-06-10 11:57:30.135558] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:37.040 [2024-06-10 11:57:30.135700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:37.040 [2024-06-10 11:57:30.135734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:37.040 [2024-06-10 11:57:30.135869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:37.040 [2024-06-10 11:57:30.135870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:37.040 11:57:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:37.040 11:57:30 -- common/autotest_common.sh@852 -- # return 0 00:19:37.040 11:57:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:37.040 11:57:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:37.040 11:57:30 -- common/autotest_common.sh@10 -- # set +x 00:19:37.040 11:57:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.040 11:57:30 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:37.040 11:57:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:37.040 11:57:30 -- common/autotest_common.sh@10 -- # set +x 00:19:37.040 [2024-06-10 11:57:30.771659] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.040 11:57:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:37.040 11:57:30 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:37.040 11:57:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:37.040 11:57:30 -- common/autotest_common.sh@10 -- # set +x 00:19:37.040 Malloc0 00:19:37.040 11:57:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:37.040 11:57:30 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:37.040 11:57:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:37.040 11:57:30 -- common/autotest_common.sh@10 -- # set +x 00:19:37.040 11:57:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:37.040 11:57:30 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:37.040 11:57:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:37.040 11:57:30 -- common/autotest_common.sh@10 -- # set +x 00:19:37.346 11:57:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:37.346 11:57:30 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:37.346 11:57:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:37.346 11:57:30 -- common/autotest_common.sh@10 -- # set +x 00:19:37.346 [2024-06-10 11:57:30.823982] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.346 11:57:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:37.346 11:57:30 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:37.346 11:57:30 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:37.346 11:57:30 -- nvmf/common.sh@520 -- # config=() 00:19:37.346 11:57:30 -- nvmf/common.sh@520 -- # local subsystem config 00:19:37.346 11:57:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:37.346 11:57:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:37.346 { 00:19:37.346 "params": { 00:19:37.346 "name": "Nvme$subsystem", 00:19:37.346 "trtype": "$TEST_TRANSPORT", 00:19:37.346 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.346 "adrfam": "ipv4", 00:19:37.346 
"trsvcid": "$NVMF_PORT", 00:19:37.346 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.346 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.346 "hdgst": ${hdgst:-false}, 00:19:37.346 "ddgst": ${ddgst:-false} 00:19:37.346 }, 00:19:37.346 "method": "bdev_nvme_attach_controller" 00:19:37.346 } 00:19:37.346 EOF 00:19:37.346 )") 00:19:37.346 11:57:30 -- nvmf/common.sh@542 -- # cat 00:19:37.346 11:57:30 -- nvmf/common.sh@544 -- # jq . 00:19:37.346 11:57:30 -- nvmf/common.sh@545 -- # IFS=, 00:19:37.346 11:57:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:37.346 "params": { 00:19:37.346 "name": "Nvme1", 00:19:37.346 "trtype": "tcp", 00:19:37.346 "traddr": "10.0.0.2", 00:19:37.346 "adrfam": "ipv4", 00:19:37.346 "trsvcid": "4420", 00:19:37.346 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:37.346 "hdgst": false, 00:19:37.346 "ddgst": false 00:19:37.346 }, 00:19:37.346 "method": "bdev_nvme_attach_controller" 00:19:37.346 }' 00:19:37.346 [2024-06-10 11:57:30.885272] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:37.346 [2024-06-10 11:57:30.885337] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1964693 ] 00:19:37.346 [2024-06-10 11:57:30.951639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:37.347 [2024-06-10 11:57:31.043346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.347 [2024-06-10 11:57:31.043462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.347 [2024-06-10 11:57:31.043465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.631 [2024-06-10 11:57:31.224347] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:37.631 [2024-06-10 11:57:31.224373] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:37.631 I/O targets: 00:19:37.631 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:37.631 00:19:37.631 00:19:37.631 CUnit - A unit testing framework for C - Version 2.1-3 00:19:37.631 http://cunit.sourceforge.net/ 00:19:37.631 00:19:37.631 00:19:37.631 Suite: bdevio tests on: Nvme1n1 00:19:37.631 Test: blockdev write read block ...passed 00:19:37.631 Test: blockdev write zeroes read block ...passed 00:19:37.631 Test: blockdev write zeroes read no split ...passed 00:19:37.631 Test: blockdev write zeroes read split ...passed 00:19:37.631 Test: blockdev write zeroes read split partial ...passed 00:19:37.631 Test: blockdev reset ...[2024-06-10 11:57:31.354796] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:37.631 [2024-06-10 11:57:31.354845] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1989480 (9): Bad file descriptor 00:19:37.631 [2024-06-10 11:57:31.374247] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:37.631 passed 00:19:37.891 Test: blockdev write read 8 blocks ...passed 00:19:37.891 Test: blockdev write read size > 128k ...passed 00:19:37.891 Test: blockdev write read invalid size ...passed 00:19:37.892 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:37.892 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:37.892 Test: blockdev write read max offset ...passed 00:19:37.892 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:37.892 Test: blockdev writev readv 8 blocks ...passed 00:19:37.892 Test: blockdev writev readv 30 x 1block ...passed 00:19:37.892 Test: blockdev writev readv block ...passed 00:19:37.892 Test: blockdev writev readv size > 128k ...passed 00:19:37.892 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:37.892 Test: blockdev comparev and writev ...[2024-06-10 11:57:31.642196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.892 [2024-06-10 11:57:31.642219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.892 [2024-06-10 11:57:31.642230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.892 [2024-06-10 11:57:31.642235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:37.892 [2024-06-10 11:57:31.642804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.892 [2024-06-10 11:57:31.642812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:37.892 [2024-06-10 11:57:31.642822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.892 [2024-06-10 11:57:31.642827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:37.892 [2024-06-10 11:57:31.643356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.892 [2024-06-10 11:57:31.643367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:37.892 [2024-06-10 11:57:31.643376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.892 [2024-06-10 11:57:31.643381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:37.892 [2024-06-10 11:57:31.643922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.892 [2024-06-10 11:57:31.643929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:37.892 [2024-06-10 11:57:31.643939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.892 [2024-06-10 11:57:31.643944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:38.152 passed 00:19:38.152 Test: blockdev nvme passthru rw ...passed 00:19:38.152 Test: blockdev nvme passthru vendor specific ...[2024-06-10 11:57:31.729164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:38.152 [2024-06-10 11:57:31.729173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:38.152 [2024-06-10 11:57:31.729585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:38.152 [2024-06-10 11:57:31.729592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:38.152 [2024-06-10 11:57:31.729986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:38.152 [2024-06-10 11:57:31.729993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:38.152 [2024-06-10 11:57:31.730352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:38.152 [2024-06-10 11:57:31.730359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:38.152 passed 00:19:38.152 Test: blockdev nvme admin passthru ...passed 00:19:38.152 Test: blockdev copy ...passed 00:19:38.152 00:19:38.152 Run Summary: Type Total Ran Passed Failed Inactive 00:19:38.152 suites 1 1 n/a 0 0 00:19:38.152 tests 23 23 23 0 0 00:19:38.152 asserts 152 152 152 0 n/a 00:19:38.152 00:19:38.152 Elapsed time = 1.138 seconds 00:19:38.413 11:57:32 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:38.413 11:57:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:38.413 11:57:32 -- common/autotest_common.sh@10 -- # set +x 00:19:38.413 11:57:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:38.413 11:57:32 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:38.413 11:57:32 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:38.413 11:57:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:38.413 11:57:32 -- nvmf/common.sh@116 -- # sync 00:19:38.413 11:57:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:38.413 11:57:32 -- nvmf/common.sh@119 -- # set +e 00:19:38.413 11:57:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:38.413 11:57:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:38.413 rmmod nvme_tcp 00:19:38.413 rmmod nvme_fabrics 00:19:38.413 rmmod nvme_keyring 00:19:38.413 11:57:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:38.413 11:57:32 -- nvmf/common.sh@123 -- # set -e 00:19:38.413 11:57:32 -- nvmf/common.sh@124 -- # return 0 00:19:38.413 11:57:32 -- nvmf/common.sh@477 -- # '[' -n 1964653 ']' 00:19:38.414 11:57:32 -- nvmf/common.sh@478 -- # killprocess 1964653 00:19:38.414 11:57:32 -- common/autotest_common.sh@926 -- # '[' -z 1964653 ']' 00:19:38.414 11:57:32 -- common/autotest_common.sh@930 -- # kill -0 1964653 00:19:38.414 11:57:32 -- common/autotest_common.sh@931 -- # uname 00:19:38.414 11:57:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:38.414 11:57:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1964653 00:19:38.674 11:57:32 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:19:38.674 11:57:32 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:19:38.674 11:57:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1964653' 00:19:38.674 killing process with pid 1964653 00:19:38.674 11:57:32 -- common/autotest_common.sh@945 -- # kill 1964653 00:19:38.674 11:57:32 -- common/autotest_common.sh@950 -- # wait 1964653 00:19:38.934 11:57:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:38.934 11:57:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:38.934 11:57:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:38.934 11:57:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:38.934 11:57:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:38.934 11:57:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.934 11:57:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:38.934 11:57:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.847 11:57:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:40.847 00:19:40.847 real 0m12.136s 00:19:40.847 user 0m13.292s 00:19:40.847 sys 0m6.313s 00:19:40.847 11:57:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:40.847 11:57:34 -- common/autotest_common.sh@10 -- # set +x 00:19:40.847 ************************************ 00:19:40.847 END TEST nvmf_bdevio_no_huge 00:19:40.847 ************************************ 00:19:40.847 11:57:34 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:40.847 11:57:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:40.847 11:57:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:40.847 11:57:34 -- common/autotest_common.sh@10 -- # set +x 00:19:40.847 ************************************ 00:19:40.847 START TEST nvmf_tls 00:19:40.847 ************************************ 00:19:40.847 11:57:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:41.109 * Looking for test storage... 
00:19:41.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:41.109 11:57:34 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:41.109 11:57:34 -- nvmf/common.sh@7 -- # uname -s 00:19:41.109 11:57:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:41.109 11:57:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.109 11:57:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.109 11:57:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.109 11:57:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:41.109 11:57:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.109 11:57:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.109 11:57:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.109 11:57:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.109 11:57:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.109 11:57:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:41.109 11:57:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:41.109 11:57:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.109 11:57:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.109 11:57:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:41.109 11:57:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:41.109 11:57:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.109 11:57:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.109 11:57:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.109 11:57:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.109 11:57:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.109 11:57:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.109 11:57:34 -- paths/export.sh@5 -- # export PATH 00:19:41.109 11:57:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.109 11:57:34 -- nvmf/common.sh@46 -- # : 0 00:19:41.109 11:57:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:41.109 11:57:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:41.109 11:57:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:41.109 11:57:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.109 11:57:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.109 11:57:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:41.109 11:57:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:41.109 11:57:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:41.109 11:57:34 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:41.109 11:57:34 -- target/tls.sh@71 -- # nvmftestinit 00:19:41.109 11:57:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:41.109 11:57:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.109 11:57:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:41.109 11:57:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:41.109 11:57:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:41.109 11:57:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.109 11:57:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.109 11:57:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.109 11:57:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:41.109 11:57:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:41.109 11:57:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:41.109 11:57:34 -- common/autotest_common.sh@10 -- # set +x 00:19:49.252 11:57:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:49.252 11:57:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:49.252 11:57:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:49.252 11:57:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:49.252 11:57:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:49.252 11:57:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:49.252 11:57:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:49.252 11:57:41 -- nvmf/common.sh@294 -- # net_devs=() 00:19:49.252 11:57:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:49.252 11:57:41 -- nvmf/common.sh@295 -- # e810=() 00:19:49.252 
11:57:41 -- nvmf/common.sh@295 -- # local -ga e810 00:19:49.252 11:57:41 -- nvmf/common.sh@296 -- # x722=() 00:19:49.252 11:57:41 -- nvmf/common.sh@296 -- # local -ga x722 00:19:49.252 11:57:41 -- nvmf/common.sh@297 -- # mlx=() 00:19:49.252 11:57:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:49.252 11:57:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:49.252 11:57:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:49.252 11:57:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:49.252 11:57:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:49.252 11:57:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:49.252 11:57:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:49.252 11:57:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:49.252 11:57:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:49.252 11:57:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:49.252 11:57:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:49.252 11:57:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:49.252 11:57:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:49.252 11:57:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:49.252 11:57:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:49.252 11:57:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:49.252 11:57:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:49.252 11:57:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:49.252 11:57:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:49.252 11:57:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:49.252 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:49.252 11:57:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:49.252 11:57:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:49.252 11:57:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.252 11:57:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.252 11:57:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:49.252 11:57:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:49.252 11:57:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:49.252 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:49.252 11:57:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:49.252 11:57:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:49.252 11:57:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.252 11:57:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.252 11:57:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:49.252 11:57:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:49.252 11:57:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:49.252 11:57:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:49.252 11:57:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:49.252 11:57:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.252 11:57:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:49.252 11:57:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.252 11:57:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:49.252 Found net devices under 
0000:31:00.0: cvl_0_0 00:19:49.252 11:57:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.252 11:57:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:49.252 11:57:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.252 11:57:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:49.252 11:57:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.252 11:57:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:49.252 Found net devices under 0000:31:00.1: cvl_0_1 00:19:49.252 11:57:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.252 11:57:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:49.252 11:57:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:49.252 11:57:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:49.252 11:57:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:49.252 11:57:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:49.252 11:57:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:49.252 11:57:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:49.252 11:57:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:49.252 11:57:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:49.252 11:57:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:49.252 11:57:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:49.252 11:57:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:49.252 11:57:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:49.252 11:57:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:49.252 11:57:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:49.252 11:57:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:49.252 11:57:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:49.252 11:57:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:49.252 11:57:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:49.252 11:57:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:49.252 11:57:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:49.252 11:57:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:49.252 11:57:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:49.252 11:57:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:49.252 11:57:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:49.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:49.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:19:49.252 00:19:49.252 --- 10.0.0.2 ping statistics --- 00:19:49.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.252 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:19:49.252 11:57:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:49.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:49.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.438 ms 00:19:49.252 00:19:49.252 --- 10.0.0.1 ping statistics --- 00:19:49.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.252 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:19:49.252 11:57:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:49.252 11:57:42 -- nvmf/common.sh@410 -- # return 0 00:19:49.252 11:57:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:49.252 11:57:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:49.252 11:57:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:49.252 11:57:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:49.252 11:57:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:49.252 11:57:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:49.252 11:57:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:49.252 11:57:42 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:49.252 11:57:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:49.252 11:57:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:49.252 11:57:42 -- common/autotest_common.sh@10 -- # set +x 00:19:49.252 11:57:42 -- nvmf/common.sh@469 -- # nvmfpid=1969392 00:19:49.252 11:57:42 -- nvmf/common.sh@470 -- # waitforlisten 1969392 00:19:49.252 11:57:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:49.252 11:57:42 -- common/autotest_common.sh@819 -- # '[' -z 1969392 ']' 00:19:49.252 11:57:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.252 11:57:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:49.252 11:57:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.252 11:57:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:49.252 11:57:42 -- common/autotest_common.sh@10 -- # set +x 00:19:49.252 [2024-06-10 11:57:42.110573] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:49.252 [2024-06-10 11:57:42.110632] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.252 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.252 [2024-06-10 11:57:42.198375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.252 [2024-06-10 11:57:42.290523] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:49.252 [2024-06-10 11:57:42.290672] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.252 [2024-06-10 11:57:42.290681] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.252 [2024-06-10 11:57:42.290688] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:49.252 [2024-06-10 11:57:42.290721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.252 11:57:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:49.252 11:57:42 -- common/autotest_common.sh@852 -- # return 0 00:19:49.252 11:57:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:49.252 11:57:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:49.253 11:57:42 -- common/autotest_common.sh@10 -- # set +x 00:19:49.253 11:57:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.253 11:57:42 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:19:49.253 11:57:42 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:49.514 true 00:19:49.514 11:57:43 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:49.514 11:57:43 -- target/tls.sh@82 -- # jq -r .tls_version 00:19:49.514 11:57:43 -- target/tls.sh@82 -- # version=0 00:19:49.514 11:57:43 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:19:49.514 11:57:43 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:49.775 11:57:43 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:49.775 11:57:43 -- target/tls.sh@90 -- # jq -r .tls_version 00:19:50.036 11:57:43 -- target/tls.sh@90 -- # version=13 00:19:50.036 11:57:43 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:19:50.036 11:57:43 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:50.036 11:57:43 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:50.036 11:57:43 -- target/tls.sh@98 -- # jq -r .tls_version 00:19:50.297 11:57:43 -- target/tls.sh@98 -- # version=7 00:19:50.297 11:57:43 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:19:50.297 11:57:43 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:50.297 11:57:43 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:50.297 11:57:44 -- target/tls.sh@105 -- # ktls=false 00:19:50.297 11:57:44 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:19:50.297 11:57:44 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:50.558 11:57:44 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:50.558 11:57:44 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:50.819 11:57:44 -- target/tls.sh@113 -- # ktls=true 00:19:50.819 11:57:44 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:19:50.819 11:57:44 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:50.819 11:57:44 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:50.819 11:57:44 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:19:51.080 11:57:44 -- target/tls.sh@121 -- # ktls=false 00:19:51.080 11:57:44 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:19:51.080 11:57:44 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
00:19:51.080 11:57:44 -- target/tls.sh@49 -- # local key hash crc 00:19:51.080 11:57:44 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:19:51.080 11:57:44 -- target/tls.sh@51 -- # hash=01 00:19:51.080 11:57:44 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:19:51.080 11:57:44 -- target/tls.sh@52 -- # gzip -1 -c 00:19:51.080 11:57:44 -- target/tls.sh@52 -- # tail -c8 00:19:51.080 11:57:44 -- target/tls.sh@52 -- # head -c 4 00:19:51.080 11:57:44 -- target/tls.sh@52 -- # crc='p$H�' 00:19:51.080 11:57:44 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:19:51.080 11:57:44 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:19:51.080 11:57:44 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:51.080 11:57:44 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:51.080 11:57:44 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:19:51.080 11:57:44 -- target/tls.sh@49 -- # local key hash crc 00:19:51.080 11:57:44 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:19:51.080 11:57:44 -- target/tls.sh@51 -- # hash=01 00:19:51.080 11:57:44 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:19:51.080 11:57:44 -- target/tls.sh@52 -- # gzip -1 -c 00:19:51.080 11:57:44 -- target/tls.sh@52 -- # tail -c8 00:19:51.080 11:57:44 -- target/tls.sh@52 -- # head -c 4 00:19:51.080 11:57:44 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:19:51.080 11:57:44 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:19:51.080 11:57:44 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:19:51.080 11:57:44 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:51.080 11:57:44 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:51.080 11:57:44 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:51.080 11:57:44 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:51.080 11:57:44 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:51.080 11:57:44 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:51.080 11:57:44 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:51.081 11:57:44 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:51.081 11:57:44 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:51.342 11:57:44 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:51.603 11:57:45 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:51.603 11:57:45 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:51.603 11:57:45 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:51.603 [2024-06-10 11:57:45.318174] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
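[editor's note] For readability, the PSK interchange string assembled in the format_interchange_psk trace above can be reproduced with a short standalone sketch. This is an illustration only, mirroring the steps shown in the trace (gzip -1 trailer = 4-byte little-endian CRC32 followed by ISIZE, so `tail -c8 | head -c4` extracts the CRC32 of the key); it assumes GNU gzip and coreutils base64 and that the CRC bytes contain no NUL or trailing newline, which holds for this particular key:
  # key and hash identifier exactly as traced in target/tls.sh above
  key=00112233445566778899aabbccddeeff
  hash=01
  # CRC32 of the ASCII key, taken from the gzip -1 trailer (4 bytes)
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
  # interchange format: NVMeTLSkey-1:<hash>:<base64(key || crc)>:
  psk="NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
  echo "$psk"   # -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
The resulting string is what the trace writes to key1.txt (and the analogous ffee... key to key2.txt) before chmod 0600 and the TLS listener setup that follows.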
00:19:51.603 11:57:45 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:51.863 11:57:45 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:51.863 [2024-06-10 11:57:45.602864] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:51.863 [2024-06-10 11:57:45.603019] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:51.863 11:57:45 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:52.124 malloc0 00:19:52.124 11:57:45 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:52.124 11:57:45 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:52.384 11:57:46 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:52.384 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.383 Initializing NVMe Controllers 00:20:02.383 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:02.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:02.383 Initialization complete. Launching workers. 
00:20:02.383 ======================================================== 00:20:02.383 Latency(us) 00:20:02.383 Device Information : IOPS MiB/s Average min max 00:20:02.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19668.66 76.83 3253.90 1061.74 4437.22 00:20:02.383 ======================================================== 00:20:02.383 Total : 19668.66 76.83 3253.90 1061.74 4437.22 00:20:02.383 00:20:02.383 11:57:56 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:02.383 11:57:56 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:02.383 11:57:56 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:02.383 11:57:56 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:02.384 11:57:56 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:02.384 11:57:56 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:02.384 11:57:56 -- target/tls.sh@28 -- # bdevperf_pid=1972206 00:20:02.384 11:57:56 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:02.384 11:57:56 -- target/tls.sh@31 -- # waitforlisten 1972206 /var/tmp/bdevperf.sock 00:20:02.384 11:57:56 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:02.384 11:57:56 -- common/autotest_common.sh@819 -- # '[' -z 1972206 ']' 00:20:02.384 11:57:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:02.384 11:57:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:02.384 11:57:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:02.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:02.384 11:57:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:02.384 11:57:56 -- common/autotest_common.sh@10 -- # set +x 00:20:02.644 [2024-06-10 11:57:56.177313] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:20:02.644 [2024-06-10 11:57:56.177380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1972206 ] 00:20:02.644 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.644 [2024-06-10 11:57:56.228741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.644 [2024-06-10 11:57:56.279365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.216 11:57:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:03.216 11:57:56 -- common/autotest_common.sh@852 -- # return 0 00:20:03.216 11:57:56 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:03.476 [2024-06-10 11:57:57.064587] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:03.476 TLSTESTn1 00:20:03.476 11:57:57 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:03.476 Running I/O for 10 seconds... 00:20:15.711 00:20:15.711 Latency(us) 00:20:15.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.711 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:15.711 Verification LBA range: start 0x0 length 0x2000 00:20:15.711 TLSTESTn1 : 10.03 2980.93 11.64 0.00 0.00 42889.29 3577.17 63351.47 00:20:15.711 =================================================================================================================== 00:20:15.711 Total : 2980.93 11.64 0.00 0.00 42889.29 3577.17 63351.47 00:20:15.711 0 00:20:15.711 11:58:07 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:15.711 11:58:07 -- target/tls.sh@45 -- # killprocess 1972206 00:20:15.711 11:58:07 -- common/autotest_common.sh@926 -- # '[' -z 1972206 ']' 00:20:15.711 11:58:07 -- common/autotest_common.sh@930 -- # kill -0 1972206 00:20:15.711 11:58:07 -- common/autotest_common.sh@931 -- # uname 00:20:15.711 11:58:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:15.711 11:58:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1972206 00:20:15.711 11:58:07 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:15.711 11:58:07 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:15.711 11:58:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1972206' 00:20:15.711 killing process with pid 1972206 00:20:15.711 11:58:07 -- common/autotest_common.sh@945 -- # kill 1972206 00:20:15.711 Received shutdown signal, test time was about 10.000000 seconds 00:20:15.711 00:20:15.711 Latency(us) 00:20:15.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.711 =================================================================================================================== 00:20:15.711 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:15.711 11:58:07 -- common/autotest_common.sh@950 -- # wait 1972206 00:20:15.711 11:58:07 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:15.711 11:58:07 -- common/autotest_common.sh@640 -- # local es=0 00:20:15.711 11:58:07 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:15.711 11:58:07 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:15.711 11:58:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:15.711 11:58:07 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:15.711 11:58:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:15.711 11:58:07 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:15.711 11:58:07 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:15.711 11:58:07 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:15.711 11:58:07 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:15.711 11:58:07 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:20:15.711 11:58:07 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:15.711 11:58:07 -- target/tls.sh@28 -- # bdevperf_pid=1974276 00:20:15.711 11:58:07 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:15.711 11:58:07 -- target/tls.sh@31 -- # waitforlisten 1974276 /var/tmp/bdevperf.sock 00:20:15.711 11:58:07 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:15.711 11:58:07 -- common/autotest_common.sh@819 -- # '[' -z 1974276 ']' 00:20:15.711 11:58:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:15.711 11:58:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:15.711 11:58:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:15.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:15.711 11:58:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:15.711 11:58:07 -- common/autotest_common.sh@10 -- # set +x 00:20:15.711 [2024-06-10 11:58:07.533303] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:20:15.711 [2024-06-10 11:58:07.533359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974276 ] 00:20:15.711 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.711 [2024-06-10 11:58:07.583782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.711 [2024-06-10 11:58:07.633742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:15.711 11:58:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:15.711 11:58:08 -- common/autotest_common.sh@852 -- # return 0 00:20:15.711 11:58:08 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:15.711 [2024-06-10 11:58:08.434975] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:15.711 [2024-06-10 11:58:08.439704] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:15.711 [2024-06-10 11:58:08.439868] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c02a00 (107): Transport endpoint is not connected 00:20:15.711 [2024-06-10 11:58:08.440862] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c02a00 (9): Bad file descriptor 00:20:15.711 [2024-06-10 11:58:08.441863] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:15.711 [2024-06-10 11:58:08.441870] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:15.711 [2024-06-10 11:58:08.441876] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:15.711 request: 00:20:15.711 { 00:20:15.711 "name": "TLSTEST", 00:20:15.711 "trtype": "tcp", 00:20:15.711 "traddr": "10.0.0.2", 00:20:15.711 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:15.711 "adrfam": "ipv4", 00:20:15.711 "trsvcid": "4420", 00:20:15.711 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.711 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:20:15.711 "method": "bdev_nvme_attach_controller", 00:20:15.711 "req_id": 1 00:20:15.711 } 00:20:15.711 Got JSON-RPC error response 00:20:15.711 response: 00:20:15.711 { 00:20:15.711 "code": -32602, 00:20:15.711 "message": "Invalid parameters" 00:20:15.711 } 00:20:15.711 11:58:08 -- target/tls.sh@36 -- # killprocess 1974276 00:20:15.711 11:58:08 -- common/autotest_common.sh@926 -- # '[' -z 1974276 ']' 00:20:15.711 11:58:08 -- common/autotest_common.sh@930 -- # kill -0 1974276 00:20:15.711 11:58:08 -- common/autotest_common.sh@931 -- # uname 00:20:15.712 11:58:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:15.712 11:58:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1974276 00:20:15.712 11:58:08 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:15.712 11:58:08 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:15.712 11:58:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1974276' 00:20:15.712 killing process with pid 1974276 00:20:15.712 11:58:08 -- common/autotest_common.sh@945 -- # kill 1974276 00:20:15.712 Received shutdown signal, test time was about 10.000000 seconds 00:20:15.712 00:20:15.712 Latency(us) 00:20:15.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.712 =================================================================================================================== 00:20:15.712 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:15.712 11:58:08 -- common/autotest_common.sh@950 -- # wait 1974276 00:20:15.712 11:58:08 -- target/tls.sh@37 -- # return 1 00:20:15.712 11:58:08 -- common/autotest_common.sh@643 -- # es=1 00:20:15.712 11:58:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:15.712 11:58:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:15.712 11:58:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:15.712 11:58:08 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:15.712 11:58:08 -- common/autotest_common.sh@640 -- # local es=0 00:20:15.712 11:58:08 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:15.712 11:58:08 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:15.712 11:58:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:15.712 11:58:08 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:15.712 11:58:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:15.712 11:58:08 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:15.712 11:58:08 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:15.712 11:58:08 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:15.712 11:58:08 -- target/tls.sh@23 -- 
# hostnqn=nqn.2016-06.io.spdk:host2 00:20:15.712 11:58:08 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:15.712 11:58:08 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:15.712 11:58:08 -- target/tls.sh@28 -- # bdevperf_pid=1974592 00:20:15.712 11:58:08 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:15.712 11:58:08 -- target/tls.sh@31 -- # waitforlisten 1974592 /var/tmp/bdevperf.sock 00:20:15.712 11:58:08 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:15.712 11:58:08 -- common/autotest_common.sh@819 -- # '[' -z 1974592 ']' 00:20:15.712 11:58:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:15.712 11:58:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:15.712 11:58:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:15.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:15.712 11:58:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:15.712 11:58:08 -- common/autotest_common.sh@10 -- # set +x 00:20:15.712 [2024-06-10 11:58:08.674388] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:15.712 [2024-06-10 11:58:08.674444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974592 ] 00:20:15.712 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.712 [2024-06-10 11:58:08.725011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.712 [2024-06-10 11:58:08.773308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:15.712 11:58:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:15.712 11:58:09 -- common/autotest_common.sh@852 -- # return 0 00:20:15.712 11:58:09 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:15.973 [2024-06-10 11:58:09.578523] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:15.973 [2024-06-10 11:58:09.582957] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:15.973 [2024-06-10 11:58:09.582975] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:15.973 [2024-06-10 11:58:09.582995] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:15.973 [2024-06-10 11:58:09.583642] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224ba00 (107): Transport endpoint is not connected 00:20:15.973 [2024-06-10 11:58:09.584637] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x224ba00 (9): Bad file descriptor 00:20:15.973 [2024-06-10 11:58:09.585639] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:15.973 [2024-06-10 11:58:09.585650] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:15.973 [2024-06-10 11:58:09.585657] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:15.973 request: 00:20:15.973 { 00:20:15.973 "name": "TLSTEST", 00:20:15.973 "trtype": "tcp", 00:20:15.973 "traddr": "10.0.0.2", 00:20:15.973 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:15.973 "adrfam": "ipv4", 00:20:15.973 "trsvcid": "4420", 00:20:15.973 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.973 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:20:15.973 "method": "bdev_nvme_attach_controller", 00:20:15.973 "req_id": 1 00:20:15.973 } 00:20:15.973 Got JSON-RPC error response 00:20:15.973 response: 00:20:15.973 { 00:20:15.973 "code": -32602, 00:20:15.973 "message": "Invalid parameters" 00:20:15.973 } 00:20:15.973 11:58:09 -- target/tls.sh@36 -- # killprocess 1974592 00:20:15.973 11:58:09 -- common/autotest_common.sh@926 -- # '[' -z 1974592 ']' 00:20:15.973 11:58:09 -- common/autotest_common.sh@930 -- # kill -0 1974592 00:20:15.973 11:58:09 -- common/autotest_common.sh@931 -- # uname 00:20:15.973 11:58:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:15.973 11:58:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1974592 00:20:15.973 11:58:09 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:15.973 11:58:09 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:15.973 11:58:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1974592' 00:20:15.973 killing process with pid 1974592 00:20:15.973 11:58:09 -- common/autotest_common.sh@945 -- # kill 1974592 00:20:15.973 Received shutdown signal, test time was about 10.000000 seconds 00:20:15.973 00:20:15.973 Latency(us) 00:20:15.973 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.973 =================================================================================================================== 00:20:15.973 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:15.973 11:58:09 -- common/autotest_common.sh@950 -- # wait 1974592 00:20:16.234 11:58:09 -- target/tls.sh@37 -- # return 1 00:20:16.234 11:58:09 -- common/autotest_common.sh@643 -- # es=1 00:20:16.234 11:58:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:16.234 11:58:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:16.234 11:58:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:16.234 11:58:09 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:16.234 11:58:09 -- common/autotest_common.sh@640 -- # local es=0 00:20:16.234 11:58:09 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:16.234 11:58:09 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:16.234 11:58:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:16.234 11:58:09 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:16.234 11:58:09 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:16.234 11:58:09 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:16.234 11:58:09 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:16.234 11:58:09 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:16.234 11:58:09 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:16.234 11:58:09 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:16.234 11:58:09 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:16.234 11:58:09 -- target/tls.sh@28 -- # bdevperf_pid=1974906 00:20:16.234 11:58:09 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:16.234 11:58:09 -- target/tls.sh@31 -- # waitforlisten 1974906 /var/tmp/bdevperf.sock 00:20:16.234 11:58:09 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:16.234 11:58:09 -- common/autotest_common.sh@819 -- # '[' -z 1974906 ']' 00:20:16.234 11:58:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:16.234 11:58:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:16.234 11:58:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:16.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:16.234 11:58:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:16.234 11:58:09 -- common/autotest_common.sh@10 -- # set +x 00:20:16.234 [2024-06-10 11:58:09.823526] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:20:16.234 [2024-06-10 11:58:09.823582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974906 ] 00:20:16.234 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.234 [2024-06-10 11:58:09.874144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.234 [2024-06-10 11:58:09.922574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:17.175 11:58:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:17.175 11:58:10 -- common/autotest_common.sh@852 -- # return 0 00:20:17.175 11:58:10 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:17.175 [2024-06-10 11:58:10.719774] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:17.175 [2024-06-10 11:58:10.725124] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:17.175 [2024-06-10 11:58:10.725142] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:17.175 [2024-06-10 11:58:10.725161] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:17.175 [2024-06-10 11:58:10.725647] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1134a00 (107): Transport endpoint is not connected 00:20:17.175 [2024-06-10 11:58:10.726642] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1134a00 (9): Bad file descriptor 00:20:17.175 [2024-06-10 11:58:10.727644] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:17.175 [2024-06-10 11:58:10.727656] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:17.175 [2024-06-10 11:58:10.727663] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:17.175 request: 00:20:17.175 { 00:20:17.175 "name": "TLSTEST", 00:20:17.175 "trtype": "tcp", 00:20:17.175 "traddr": "10.0.0.2", 00:20:17.175 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:17.175 "adrfam": "ipv4", 00:20:17.175 "trsvcid": "4420", 00:20:17.175 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:17.175 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:20:17.175 "method": "bdev_nvme_attach_controller", 00:20:17.175 "req_id": 1 00:20:17.175 } 00:20:17.175 Got JSON-RPC error response 00:20:17.175 response: 00:20:17.175 { 00:20:17.175 "code": -32602, 00:20:17.175 "message": "Invalid parameters" 00:20:17.175 } 00:20:17.175 11:58:10 -- target/tls.sh@36 -- # killprocess 1974906 00:20:17.175 11:58:10 -- common/autotest_common.sh@926 -- # '[' -z 1974906 ']' 00:20:17.175 11:58:10 -- common/autotest_common.sh@930 -- # kill -0 1974906 00:20:17.175 11:58:10 -- common/autotest_common.sh@931 -- # uname 00:20:17.175 11:58:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:17.175 11:58:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1974906 00:20:17.175 11:58:10 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:17.175 11:58:10 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:17.175 11:58:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1974906' 00:20:17.175 killing process with pid 1974906 00:20:17.175 11:58:10 -- common/autotest_common.sh@945 -- # kill 1974906 00:20:17.175 Received shutdown signal, test time was about 10.000000 seconds 00:20:17.175 00:20:17.175 Latency(us) 00:20:17.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.175 =================================================================================================================== 00:20:17.175 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:17.175 11:58:10 -- common/autotest_common.sh@950 -- # wait 1974906 00:20:17.175 11:58:10 -- target/tls.sh@37 -- # return 1 00:20:17.175 11:58:10 -- common/autotest_common.sh@643 -- # es=1 00:20:17.175 11:58:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:17.175 11:58:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:17.175 11:58:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:17.175 11:58:10 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:17.176 11:58:10 -- common/autotest_common.sh@640 -- # local es=0 00:20:17.176 11:58:10 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:17.176 11:58:10 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:17.176 11:58:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:17.176 11:58:10 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:17.176 11:58:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:17.176 11:58:10 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:17.176 11:58:10 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:17.176 11:58:10 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:17.176 11:58:10 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:17.176 11:58:10 -- target/tls.sh@23 -- # psk= 00:20:17.176 11:58:10 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:17.176 11:58:10 -- target/tls.sh@28 
-- # bdevperf_pid=1974981 00:20:17.176 11:58:10 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:17.176 11:58:10 -- target/tls.sh@31 -- # waitforlisten 1974981 /var/tmp/bdevperf.sock 00:20:17.176 11:58:10 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:17.176 11:58:10 -- common/autotest_common.sh@819 -- # '[' -z 1974981 ']' 00:20:17.176 11:58:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:17.176 11:58:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:17.176 11:58:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:17.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:17.176 11:58:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:17.176 11:58:10 -- common/autotest_common.sh@10 -- # set +x 00:20:17.436 [2024-06-10 11:58:10.961787] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:17.436 [2024-06-10 11:58:10.961842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974981 ] 00:20:17.436 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.436 [2024-06-10 11:58:11.011219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.436 [2024-06-10 11:58:11.062139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.006 11:58:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:18.006 11:58:11 -- common/autotest_common.sh@852 -- # return 0 00:20:18.006 11:58:11 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:18.267 [2024-06-10 11:58:11.857900] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:18.267 [2024-06-10 11:58:11.859904] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc7340 (9): Bad file descriptor 00:20:18.267 [2024-06-10 11:58:11.860903] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:18.267 [2024-06-10 11:58:11.860910] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:18.267 [2024-06-10 11:58:11.860916] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:18.267 request: 00:20:18.267 { 00:20:18.267 "name": "TLSTEST", 00:20:18.267 "trtype": "tcp", 00:20:18.267 "traddr": "10.0.0.2", 00:20:18.267 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:18.267 "adrfam": "ipv4", 00:20:18.267 "trsvcid": "4420", 00:20:18.267 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.267 "method": "bdev_nvme_attach_controller", 00:20:18.267 "req_id": 1 00:20:18.267 } 00:20:18.267 Got JSON-RPC error response 00:20:18.267 response: 00:20:18.267 { 00:20:18.267 "code": -32602, 00:20:18.267 "message": "Invalid parameters" 00:20:18.267 } 00:20:18.267 11:58:11 -- target/tls.sh@36 -- # killprocess 1974981 00:20:18.267 11:58:11 -- common/autotest_common.sh@926 -- # '[' -z 1974981 ']' 00:20:18.267 11:58:11 -- common/autotest_common.sh@930 -- # kill -0 1974981 00:20:18.267 11:58:11 -- common/autotest_common.sh@931 -- # uname 00:20:18.267 11:58:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:18.267 11:58:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1974981 00:20:18.267 11:58:11 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:18.267 11:58:11 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:18.267 11:58:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1974981' 00:20:18.267 killing process with pid 1974981 00:20:18.267 11:58:11 -- common/autotest_common.sh@945 -- # kill 1974981 00:20:18.267 Received shutdown signal, test time was about 10.000000 seconds 00:20:18.267 00:20:18.267 Latency(us) 00:20:18.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.267 =================================================================================================================== 00:20:18.267 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:18.267 11:58:11 -- common/autotest_common.sh@950 -- # wait 1974981 00:20:18.267 11:58:12 -- target/tls.sh@37 -- # return 1 00:20:18.267 11:58:12 -- common/autotest_common.sh@643 -- # es=1 00:20:18.267 11:58:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:18.267 11:58:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:18.267 11:58:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:18.267 11:58:12 -- target/tls.sh@167 -- # killprocess 1969392 00:20:18.267 11:58:12 -- common/autotest_common.sh@926 -- # '[' -z 1969392 ']' 00:20:18.267 11:58:12 -- common/autotest_common.sh@930 -- # kill -0 1969392 00:20:18.267 11:58:12 -- common/autotest_common.sh@931 -- # uname 00:20:18.527 11:58:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:18.527 11:58:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1969392 00:20:18.527 11:58:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:18.527 11:58:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:18.527 11:58:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1969392' 00:20:18.527 killing process with pid 1969392 00:20:18.527 11:58:12 -- common/autotest_common.sh@945 -- # kill 1969392 00:20:18.527 11:58:12 -- common/autotest_common.sh@950 -- # wait 1969392 00:20:18.527 11:58:12 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:20:18.527 11:58:12 -- target/tls.sh@49 -- # local key hash crc 00:20:18.527 11:58:12 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:18.527 11:58:12 -- target/tls.sh@51 -- # hash=02 00:20:18.527 11:58:12 -- target/tls.sh@52 -- # tail 
-c8 00:20:18.527 11:58:12 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:20:18.527 11:58:12 -- target/tls.sh@52 -- # gzip -1 -c 00:20:18.527 11:58:12 -- target/tls.sh@52 -- # head -c 4 00:20:18.527 11:58:12 -- target/tls.sh@52 -- # crc='�e�'\''' 00:20:18.527 11:58:12 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:18.527 11:58:12 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:20:18.527 11:58:12 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:18.527 11:58:12 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:18.527 11:58:12 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:18.527 11:58:12 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:18.527 11:58:12 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:18.527 11:58:12 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:20:18.527 11:58:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:18.527 11:58:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:18.527 11:58:12 -- common/autotest_common.sh@10 -- # set +x 00:20:18.527 11:58:12 -- nvmf/common.sh@469 -- # nvmfpid=1975317 00:20:18.527 11:58:12 -- nvmf/common.sh@470 -- # waitforlisten 1975317 00:20:18.527 11:58:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:18.527 11:58:12 -- common/autotest_common.sh@819 -- # '[' -z 1975317 ']' 00:20:18.527 11:58:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.527 11:58:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:18.527 11:58:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.527 11:58:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:18.527 11:58:12 -- common/autotest_common.sh@10 -- # set +x 00:20:18.527 [2024-06-10 11:58:12.288218] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:18.527 [2024-06-10 11:58:12.288282] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.787 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.787 [2024-06-10 11:58:12.370813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.787 [2024-06-10 11:58:12.422249] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:18.787 [2024-06-10 11:58:12.422346] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.787 [2024-06-10 11:58:12.422352] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:18.787 [2024-06-10 11:58:12.422357] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
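The key_long.txt value generated above comes from the same derivation, run on the 48-hex-character key with hash id 02; a sketch under the same assumptions, whose output should match the NVMeTLSkey-1:02 string in the trace:
key=00112233445566778899aabbccddeeff0011223344556677
crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
echo "NVMeTLSkey-1:02:$(echo -n "$key$crc" | base64):"     # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: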
00:20:18.787 [2024-06-10 11:58:12.422377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.359 11:58:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:19.359 11:58:13 -- common/autotest_common.sh@852 -- # return 0 00:20:19.359 11:58:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:19.359 11:58:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:19.359 11:58:13 -- common/autotest_common.sh@10 -- # set +x 00:20:19.359 11:58:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.359 11:58:13 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:19.359 11:58:13 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:19.359 11:58:13 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:19.619 [2024-06-10 11:58:13.244453] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.619 11:58:13 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:19.880 11:58:13 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:19.880 [2024-06-10 11:58:13.529146] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:19.880 [2024-06-10 11:58:13.529311] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.880 11:58:13 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:20.141 malloc0 00:20:20.141 11:58:13 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:20.141 11:58:13 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:20.402 11:58:13 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:20.402 11:58:13 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:20.402 11:58:13 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:20.402 11:58:13 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:20.402 11:58:13 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:20.402 11:58:13 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:20.402 11:58:13 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:20.402 11:58:13 -- target/tls.sh@28 -- # bdevperf_pid=1975678 00:20:20.402 11:58:13 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:20.402 11:58:13 -- target/tls.sh@31 -- # waitforlisten 1975678 /var/tmp/bdevperf.sock 00:20:20.402 11:58:13 -- common/autotest_common.sh@819 -- # '[' -z 1975678 
']' 00:20:20.402 11:58:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:20.402 11:58:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:20.402 11:58:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:20.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:20.402 11:58:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:20.402 11:58:13 -- common/autotest_common.sh@10 -- # set +x 00:20:20.402 [2024-06-10 11:58:13.983629] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:20.403 [2024-06-10 11:58:13.983679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975678 ] 00:20:20.403 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.403 [2024-06-10 11:58:14.038408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.403 [2024-06-10 11:58:14.089125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:21.346 11:58:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:21.346 11:58:14 -- common/autotest_common.sh@852 -- # return 0 00:20:21.346 11:58:14 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:21.346 [2024-06-10 11:58:14.898233] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:21.346 TLSTESTn1 00:20:21.346 11:58:14 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:21.346 Running I/O for 10 seconds... 
00:20:33.578 00:20:33.578 Latency(us) 00:20:33.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.578 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:33.578 Verification LBA range: start 0x0 length 0x2000 00:20:33.578 TLSTESTn1 : 10.04 2719.98 10.62 0.00 0.00 46970.77 5515.95 58545.49 00:20:33.578 =================================================================================================================== 00:20:33.578 Total : 2719.98 10.62 0.00 0.00 46970.77 5515.95 58545.49 00:20:33.578 0 00:20:33.578 11:58:25 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:33.578 11:58:25 -- target/tls.sh@45 -- # killprocess 1975678 00:20:33.578 11:58:25 -- common/autotest_common.sh@926 -- # '[' -z 1975678 ']' 00:20:33.578 11:58:25 -- common/autotest_common.sh@930 -- # kill -0 1975678 00:20:33.578 11:58:25 -- common/autotest_common.sh@931 -- # uname 00:20:33.578 11:58:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:33.578 11:58:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1975678 00:20:33.578 11:58:25 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:33.578 11:58:25 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:33.578 11:58:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1975678' 00:20:33.578 killing process with pid 1975678 00:20:33.578 11:58:25 -- common/autotest_common.sh@945 -- # kill 1975678 00:20:33.578 Received shutdown signal, test time was about 10.000000 seconds 00:20:33.578 00:20:33.578 Latency(us) 00:20:33.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.578 =================================================================================================================== 00:20:33.578 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:33.578 11:58:25 -- common/autotest_common.sh@950 -- # wait 1975678 00:20:33.578 11:58:25 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:33.578 11:58:25 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:33.578 11:58:25 -- common/autotest_common.sh@640 -- # local es=0 00:20:33.578 11:58:25 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:33.578 11:58:25 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:33.578 11:58:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:33.578 11:58:25 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:33.578 11:58:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:33.578 11:58:25 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:33.578 11:58:25 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:33.578 11:58:25 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:33.578 11:58:25 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:33.578 11:58:25 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:33.579 11:58:25 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:33.579 11:58:25 -- target/tls.sh@28 -- # bdevperf_pid=1978037 00:20:33.579 11:58:25 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:33.579 11:58:25 -- target/tls.sh@31 -- # waitforlisten 1978037 /var/tmp/bdevperf.sock 00:20:33.579 11:58:25 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:33.579 11:58:25 -- common/autotest_common.sh@819 -- # '[' -z 1978037 ']' 00:20:33.579 11:58:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:33.579 11:58:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:33.579 11:58:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:33.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:33.579 11:58:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:33.579 11:58:25 -- common/autotest_common.sh@10 -- # set +x 00:20:33.579 [2024-06-10 11:58:25.377558] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:33.579 [2024-06-10 11:58:25.377619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1978037 ] 00:20:33.579 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.579 [2024-06-10 11:58:25.428296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.579 [2024-06-10 11:58:25.478128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.579 11:58:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:33.579 11:58:26 -- common/autotest_common.sh@852 -- # return 0 00:20:33.579 11:58:26 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:33.579 [2024-06-10 11:58:26.279508] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:33.579 [2024-06-10 11:58:26.279543] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:33.579 request: 00:20:33.579 { 00:20:33.579 "name": "TLSTEST", 00:20:33.579 "trtype": "tcp", 00:20:33.579 "traddr": "10.0.0.2", 00:20:33.579 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:33.579 "adrfam": "ipv4", 00:20:33.579 "trsvcid": "4420", 00:20:33.579 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.579 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:33.579 "method": "bdev_nvme_attach_controller", 00:20:33.579 "req_id": 1 00:20:33.579 } 00:20:33.579 Got JSON-RPC error response 00:20:33.579 response: 00:20:33.579 { 00:20:33.579 "code": -22, 00:20:33.579 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:33.579 } 00:20:33.579 11:58:26 -- target/tls.sh@36 -- # killprocess 1978037 00:20:33.579 11:58:26 -- common/autotest_common.sh@926 -- # '[' -z 1978037 ']' 00:20:33.579 11:58:26 -- 
common/autotest_common.sh@930 -- # kill -0 1978037 00:20:33.579 11:58:26 -- common/autotest_common.sh@931 -- # uname 00:20:33.579 11:58:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:33.579 11:58:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1978037 00:20:33.579 11:58:26 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:33.579 11:58:26 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:33.579 11:58:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1978037' 00:20:33.579 killing process with pid 1978037 00:20:33.579 11:58:26 -- common/autotest_common.sh@945 -- # kill 1978037 00:20:33.579 Received shutdown signal, test time was about 10.000000 seconds 00:20:33.579 00:20:33.579 Latency(us) 00:20:33.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.579 =================================================================================================================== 00:20:33.579 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:33.579 11:58:26 -- common/autotest_common.sh@950 -- # wait 1978037 00:20:33.579 11:58:26 -- target/tls.sh@37 -- # return 1 00:20:33.579 11:58:26 -- common/autotest_common.sh@643 -- # es=1 00:20:33.579 11:58:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:33.579 11:58:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:33.579 11:58:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:33.579 11:58:26 -- target/tls.sh@183 -- # killprocess 1975317 00:20:33.579 11:58:26 -- common/autotest_common.sh@926 -- # '[' -z 1975317 ']' 00:20:33.579 11:58:26 -- common/autotest_common.sh@930 -- # kill -0 1975317 00:20:33.579 11:58:26 -- common/autotest_common.sh@931 -- # uname 00:20:33.579 11:58:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:33.579 11:58:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1975317 00:20:33.579 11:58:26 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:33.579 11:58:26 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:33.579 11:58:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1975317' 00:20:33.579 killing process with pid 1975317 00:20:33.579 11:58:26 -- common/autotest_common.sh@945 -- # kill 1975317 00:20:33.579 11:58:26 -- common/autotest_common.sh@950 -- # wait 1975317 00:20:33.579 11:58:26 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:33.579 11:58:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:33.579 11:58:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:33.579 11:58:26 -- common/autotest_common.sh@10 -- # set +x 00:20:33.579 11:58:26 -- nvmf/common.sh@469 -- # nvmfpid=1978181 00:20:33.579 11:58:26 -- nvmf/common.sh@470 -- # waitforlisten 1978181 00:20:33.579 11:58:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:33.579 11:58:26 -- common/autotest_common.sh@819 -- # '[' -z 1978181 ']' 00:20:33.579 11:58:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.579 11:58:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:33.579 11:58:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
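The "Could not retrieve PSK from file" (-22) failure above and the nvmf_subsystem_add_host "Internal error" farther down both follow the chmod 0666 applied at target/tls.sh@179: tcp_load_psk appears to require an owner-only key file, which is why the key is restored at target/tls.sh@190 below before the next positive test. A sketch of the two modes involved, using the same path:
chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt   # PSK load now fails with "Incorrect permissions for PSK file"
chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt   # owner-only access lets the key be loaded again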
00:20:33.579 11:58:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:33.579 11:58:26 -- common/autotest_common.sh@10 -- # set +x 00:20:33.579 [2024-06-10 11:58:26.702671] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:33.579 [2024-06-10 11:58:26.702751] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.579 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.579 [2024-06-10 11:58:26.788157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.579 [2024-06-10 11:58:26.846592] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:33.579 [2024-06-10 11:58:26.846691] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.579 [2024-06-10 11:58:26.846697] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:33.579 [2024-06-10 11:58:26.846703] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:33.579 [2024-06-10 11:58:26.846719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.841 11:58:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:33.841 11:58:27 -- common/autotest_common.sh@852 -- # return 0 00:20:33.841 11:58:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:33.841 11:58:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:33.841 11:58:27 -- common/autotest_common.sh@10 -- # set +x 00:20:33.841 11:58:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:33.841 11:58:27 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:33.841 11:58:27 -- common/autotest_common.sh@640 -- # local es=0 00:20:33.841 11:58:27 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:33.841 11:58:27 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:20:33.841 11:58:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:33.841 11:58:27 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:20:33.841 11:58:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:33.841 11:58:27 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:33.841 11:58:27 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:33.841 11:58:27 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:34.101 [2024-06-10 11:58:27.634475] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.101 11:58:27 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:34.101 11:58:27 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:34.399 [2024-06-10 11:58:27.931209] tcp.c: 
912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:34.399 [2024-06-10 11:58:27.931378] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.399 11:58:27 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:34.399 malloc0 00:20:34.399 11:58:28 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:34.692 11:58:28 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:34.692 [2024-06-10 11:58:28.366039] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:34.692 [2024-06-10 11:58:28.366058] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:34.692 [2024-06-10 11:58:28.366070] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:20:34.692 request: 00:20:34.692 { 00:20:34.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.692 "host": "nqn.2016-06.io.spdk:host1", 00:20:34.692 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:34.692 "method": "nvmf_subsystem_add_host", 00:20:34.692 "req_id": 1 00:20:34.692 } 00:20:34.692 Got JSON-RPC error response 00:20:34.692 response: 00:20:34.692 { 00:20:34.692 "code": -32603, 00:20:34.692 "message": "Internal error" 00:20:34.692 } 00:20:34.692 11:58:28 -- common/autotest_common.sh@643 -- # es=1 00:20:34.692 11:58:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:34.692 11:58:28 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:34.692 11:58:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:34.692 11:58:28 -- target/tls.sh@189 -- # killprocess 1978181 00:20:34.692 11:58:28 -- common/autotest_common.sh@926 -- # '[' -z 1978181 ']' 00:20:34.692 11:58:28 -- common/autotest_common.sh@930 -- # kill -0 1978181 00:20:34.692 11:58:28 -- common/autotest_common.sh@931 -- # uname 00:20:34.692 11:58:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:34.692 11:58:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1978181 00:20:34.692 11:58:28 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:34.692 11:58:28 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:34.692 11:58:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1978181' 00:20:34.692 killing process with pid 1978181 00:20:34.692 11:58:28 -- common/autotest_common.sh@945 -- # kill 1978181 00:20:34.692 11:58:28 -- common/autotest_common.sh@950 -- # wait 1978181 00:20:34.953 11:58:28 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:34.953 11:58:28 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:20:34.953 11:58:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:34.953 11:58:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:34.953 11:58:28 -- common/autotest_common.sh@10 -- # set +x 00:20:34.953 11:58:28 -- nvmf/common.sh@469 -- # nvmfpid=1978655 00:20:34.953 11:58:28 -- nvmf/common.sh@470 -- # waitforlisten 1978655 00:20:34.953 11:58:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:34.953 11:58:28 -- common/autotest_common.sh@819 -- # '[' -z 1978655 ']' 00:20:34.953 11:58:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.953 11:58:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:34.953 11:58:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.953 11:58:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:34.953 11:58:28 -- common/autotest_common.sh@10 -- # set +x 00:20:34.953 [2024-06-10 11:58:28.613563] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:34.953 [2024-06-10 11:58:28.613619] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.953 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.953 [2024-06-10 11:58:28.694755] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.214 [2024-06-10 11:58:28.747408] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:35.214 [2024-06-10 11:58:28.747500] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.214 [2024-06-10 11:58:28.747506] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.214 [2024-06-10 11:58:28.747511] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
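The second failure in the trace (nvmf_subsystem_add_host reporting "Incorrect permissions for PSK file" and returning -32603) is resolved the same way: target/tls.sh@190 tightens the key's mode before target/tls.sh@193 relaunches the target inside the test namespace. A condensed sketch of that recovery step, using the same paths the log shows:

    chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &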
00:20:35.214 [2024-06-10 11:58:28.747524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.785 11:58:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:35.785 11:58:29 -- common/autotest_common.sh@852 -- # return 0 00:20:35.785 11:58:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:35.785 11:58:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:35.785 11:58:29 -- common/autotest_common.sh@10 -- # set +x 00:20:35.785 11:58:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.785 11:58:29 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:35.785 11:58:29 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:35.785 11:58:29 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:35.785 [2024-06-10 11:58:29.541656] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.045 11:58:29 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:36.045 11:58:29 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:36.306 [2024-06-10 11:58:29.826351] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:36.306 [2024-06-10 11:58:29.826508] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.306 11:58:29 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:36.306 malloc0 00:20:36.306 11:58:29 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:36.568 11:58:30 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:36.568 11:58:30 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:36.568 11:58:30 -- target/tls.sh@197 -- # bdevperf_pid=1979010 00:20:36.568 11:58:30 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:36.568 11:58:30 -- target/tls.sh@200 -- # waitforlisten 1979010 /var/tmp/bdevperf.sock 00:20:36.568 11:58:30 -- common/autotest_common.sh@819 -- # '[' -z 1979010 ']' 00:20:36.568 11:58:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.568 11:58:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:36.568 11:58:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
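With the key now mode 0600, setup_nvmf_tgt (target/tls.sh@194) brings the target up over TLS and the attach is expected to succeed. The rpc.py sequence it just issued boils down to the following sketch, where -k marks the TCP listener as a secure (PSK/TLS) channel and the PSK is then bound to host1:

    rpc=scripts/rpc.py
    key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"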
00:20:36.568 11:58:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:36.568 11:58:30 -- common/autotest_common.sh@10 -- # set +x 00:20:36.568 [2024-06-10 11:58:30.284091] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:36.568 [2024-06-10 11:58:30.284150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1979010 ] 00:20:36.568 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.828 [2024-06-10 11:58:30.342204] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.828 [2024-06-10 11:58:30.392626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.400 11:58:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:37.400 11:58:31 -- common/autotest_common.sh@852 -- # return 0 00:20:37.400 11:58:31 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:37.661 [2024-06-10 11:58:31.202338] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:37.661 TLSTESTn1 00:20:37.661 11:58:31 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:37.922 11:58:31 -- target/tls.sh@205 -- # tgtconf='{ 00:20:37.922 "subsystems": [ 00:20:37.922 { 00:20:37.922 "subsystem": "iobuf", 00:20:37.922 "config": [ 00:20:37.922 { 00:20:37.922 "method": "iobuf_set_options", 00:20:37.922 "params": { 00:20:37.922 "small_pool_count": 8192, 00:20:37.922 "large_pool_count": 1024, 00:20:37.922 "small_bufsize": 8192, 00:20:37.922 "large_bufsize": 135168 00:20:37.922 } 00:20:37.922 } 00:20:37.922 ] 00:20:37.922 }, 00:20:37.922 { 00:20:37.922 "subsystem": "sock", 00:20:37.922 "config": [ 00:20:37.922 { 00:20:37.922 "method": "sock_impl_set_options", 00:20:37.922 "params": { 00:20:37.922 "impl_name": "posix", 00:20:37.922 "recv_buf_size": 2097152, 00:20:37.922 "send_buf_size": 2097152, 00:20:37.922 "enable_recv_pipe": true, 00:20:37.922 "enable_quickack": false, 00:20:37.922 "enable_placement_id": 0, 00:20:37.922 "enable_zerocopy_send_server": true, 00:20:37.922 "enable_zerocopy_send_client": false, 00:20:37.922 "zerocopy_threshold": 0, 00:20:37.922 "tls_version": 0, 00:20:37.922 "enable_ktls": false 00:20:37.922 } 00:20:37.922 }, 00:20:37.922 { 00:20:37.922 "method": "sock_impl_set_options", 00:20:37.922 "params": { 00:20:37.922 "impl_name": "ssl", 00:20:37.922 "recv_buf_size": 4096, 00:20:37.922 "send_buf_size": 4096, 00:20:37.922 "enable_recv_pipe": true, 00:20:37.922 "enable_quickack": false, 00:20:37.922 "enable_placement_id": 0, 00:20:37.922 "enable_zerocopy_send_server": true, 00:20:37.922 "enable_zerocopy_send_client": false, 00:20:37.922 "zerocopy_threshold": 0, 00:20:37.922 "tls_version": 0, 00:20:37.922 "enable_ktls": false 00:20:37.922 } 00:20:37.922 } 00:20:37.922 ] 00:20:37.922 }, 00:20:37.922 { 00:20:37.922 "subsystem": "vmd", 00:20:37.922 "config": [] 00:20:37.922 }, 00:20:37.922 { 00:20:37.922 "subsystem": "accel", 00:20:37.922 "config": [ 00:20:37.922 { 00:20:37.922 "method": "accel_set_options", 00:20:37.922 "params": { 00:20:37.922 "small_cache_size": 128, 
00:20:37.922 "large_cache_size": 16, 00:20:37.922 "task_count": 2048, 00:20:37.922 "sequence_count": 2048, 00:20:37.922 "buf_count": 2048 00:20:37.922 } 00:20:37.922 } 00:20:37.922 ] 00:20:37.922 }, 00:20:37.922 { 00:20:37.922 "subsystem": "bdev", 00:20:37.922 "config": [ 00:20:37.922 { 00:20:37.922 "method": "bdev_set_options", 00:20:37.922 "params": { 00:20:37.922 "bdev_io_pool_size": 65535, 00:20:37.922 "bdev_io_cache_size": 256, 00:20:37.922 "bdev_auto_examine": true, 00:20:37.922 "iobuf_small_cache_size": 128, 00:20:37.922 "iobuf_large_cache_size": 16 00:20:37.922 } 00:20:37.922 }, 00:20:37.922 { 00:20:37.922 "method": "bdev_raid_set_options", 00:20:37.922 "params": { 00:20:37.922 "process_window_size_kb": 1024 00:20:37.922 } 00:20:37.922 }, 00:20:37.922 { 00:20:37.922 "method": "bdev_iscsi_set_options", 00:20:37.922 "params": { 00:20:37.922 "timeout_sec": 30 00:20:37.922 } 00:20:37.922 }, 00:20:37.922 { 00:20:37.922 "method": "bdev_nvme_set_options", 00:20:37.922 "params": { 00:20:37.922 "action_on_timeout": "none", 00:20:37.922 "timeout_us": 0, 00:20:37.922 "timeout_admin_us": 0, 00:20:37.922 "keep_alive_timeout_ms": 10000, 00:20:37.922 "transport_retry_count": 4, 00:20:37.922 "arbitration_burst": 0, 00:20:37.922 "low_priority_weight": 0, 00:20:37.922 "medium_priority_weight": 0, 00:20:37.922 "high_priority_weight": 0, 00:20:37.922 "nvme_adminq_poll_period_us": 10000, 00:20:37.922 "nvme_ioq_poll_period_us": 0, 00:20:37.922 "io_queue_requests": 0, 00:20:37.922 "delay_cmd_submit": true, 00:20:37.922 "bdev_retry_count": 3, 00:20:37.922 "transport_ack_timeout": 0, 00:20:37.922 "ctrlr_loss_timeout_sec": 0, 00:20:37.922 "reconnect_delay_sec": 0, 00:20:37.922 "fast_io_fail_timeout_sec": 0, 00:20:37.922 "generate_uuids": false, 00:20:37.922 "transport_tos": 0, 00:20:37.922 "io_path_stat": false, 00:20:37.922 "allow_accel_sequence": false 00:20:37.922 } 00:20:37.922 }, 00:20:37.922 { 00:20:37.922 "method": "bdev_nvme_set_hotplug", 00:20:37.922 "params": { 00:20:37.922 "period_us": 100000, 00:20:37.922 "enable": false 00:20:37.922 } 00:20:37.922 }, 00:20:37.922 { 00:20:37.922 "method": "bdev_malloc_create", 00:20:37.922 "params": { 00:20:37.922 "name": "malloc0", 00:20:37.922 "num_blocks": 8192, 00:20:37.922 "block_size": 4096, 00:20:37.922 "physical_block_size": 4096, 00:20:37.922 "uuid": "ea829fe3-c6df-4341-bfa7-fffebe8f4d50", 00:20:37.922 "optimal_io_boundary": 0 00:20:37.922 } 00:20:37.922 }, 00:20:37.922 { 00:20:37.922 "method": "bdev_wait_for_examine" 00:20:37.922 } 00:20:37.922 ] 00:20:37.922 }, 00:20:37.922 { 00:20:37.922 "subsystem": "nbd", 00:20:37.922 "config": [] 00:20:37.922 }, 00:20:37.922 { 00:20:37.922 "subsystem": "scheduler", 00:20:37.922 "config": [ 00:20:37.922 { 00:20:37.922 "method": "framework_set_scheduler", 00:20:37.922 "params": { 00:20:37.922 "name": "static" 00:20:37.922 } 00:20:37.922 } 00:20:37.922 ] 00:20:37.922 }, 00:20:37.922 { 00:20:37.922 "subsystem": "nvmf", 00:20:37.922 "config": [ 00:20:37.922 { 00:20:37.922 "method": "nvmf_set_config", 00:20:37.922 "params": { 00:20:37.922 "discovery_filter": "match_any", 00:20:37.922 "admin_cmd_passthru": { 00:20:37.922 "identify_ctrlr": false 00:20:37.922 } 00:20:37.922 } 00:20:37.922 }, 00:20:37.922 { 00:20:37.922 "method": "nvmf_set_max_subsystems", 00:20:37.922 "params": { 00:20:37.922 "max_subsystems": 1024 00:20:37.922 } 00:20:37.922 }, 00:20:37.922 { 00:20:37.922 "method": "nvmf_set_crdt", 00:20:37.922 "params": { 00:20:37.922 "crdt1": 0, 00:20:37.922 "crdt2": 0, 00:20:37.922 "crdt3": 0 00:20:37.922 } 
00:20:37.922 }, 00:20:37.922 { 00:20:37.922 "method": "nvmf_create_transport", 00:20:37.922 "params": { 00:20:37.922 "trtype": "TCP", 00:20:37.922 "max_queue_depth": 128, 00:20:37.922 "max_io_qpairs_per_ctrlr": 127, 00:20:37.922 "in_capsule_data_size": 4096, 00:20:37.923 "max_io_size": 131072, 00:20:37.923 "io_unit_size": 131072, 00:20:37.923 "max_aq_depth": 128, 00:20:37.923 "num_shared_buffers": 511, 00:20:37.923 "buf_cache_size": 4294967295, 00:20:37.923 "dif_insert_or_strip": false, 00:20:37.923 "zcopy": false, 00:20:37.923 "c2h_success": false, 00:20:37.923 "sock_priority": 0, 00:20:37.923 "abort_timeout_sec": 1 00:20:37.923 } 00:20:37.923 }, 00:20:37.923 { 00:20:37.923 "method": "nvmf_create_subsystem", 00:20:37.923 "params": { 00:20:37.923 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.923 "allow_any_host": false, 00:20:37.923 "serial_number": "SPDK00000000000001", 00:20:37.923 "model_number": "SPDK bdev Controller", 00:20:37.923 "max_namespaces": 10, 00:20:37.923 "min_cntlid": 1, 00:20:37.923 "max_cntlid": 65519, 00:20:37.923 "ana_reporting": false 00:20:37.923 } 00:20:37.923 }, 00:20:37.923 { 00:20:37.923 "method": "nvmf_subsystem_add_host", 00:20:37.923 "params": { 00:20:37.923 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.923 "host": "nqn.2016-06.io.spdk:host1", 00:20:37.923 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:37.923 } 00:20:37.923 }, 00:20:37.923 { 00:20:37.923 "method": "nvmf_subsystem_add_ns", 00:20:37.923 "params": { 00:20:37.923 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.923 "namespace": { 00:20:37.923 "nsid": 1, 00:20:37.923 "bdev_name": "malloc0", 00:20:37.923 "nguid": "EA829FE3C6DF4341BFA7FFFEBE8F4D50", 00:20:37.923 "uuid": "ea829fe3-c6df-4341-bfa7-fffebe8f4d50" 00:20:37.923 } 00:20:37.923 } 00:20:37.923 }, 00:20:37.923 { 00:20:37.923 "method": "nvmf_subsystem_add_listener", 00:20:37.923 "params": { 00:20:37.923 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.923 "listen_address": { 00:20:37.923 "trtype": "TCP", 00:20:37.923 "adrfam": "IPv4", 00:20:37.923 "traddr": "10.0.0.2", 00:20:37.923 "trsvcid": "4420" 00:20:37.923 }, 00:20:37.923 "secure_channel": true 00:20:37.923 } 00:20:37.923 } 00:20:37.923 ] 00:20:37.923 } 00:20:37.923 ] 00:20:37.923 }' 00:20:37.923 11:58:31 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:38.184 11:58:31 -- target/tls.sh@206 -- # bdevperfconf='{ 00:20:38.184 "subsystems": [ 00:20:38.184 { 00:20:38.184 "subsystem": "iobuf", 00:20:38.184 "config": [ 00:20:38.184 { 00:20:38.184 "method": "iobuf_set_options", 00:20:38.184 "params": { 00:20:38.184 "small_pool_count": 8192, 00:20:38.184 "large_pool_count": 1024, 00:20:38.184 "small_bufsize": 8192, 00:20:38.184 "large_bufsize": 135168 00:20:38.184 } 00:20:38.184 } 00:20:38.184 ] 00:20:38.184 }, 00:20:38.184 { 00:20:38.184 "subsystem": "sock", 00:20:38.184 "config": [ 00:20:38.184 { 00:20:38.184 "method": "sock_impl_set_options", 00:20:38.184 "params": { 00:20:38.184 "impl_name": "posix", 00:20:38.184 "recv_buf_size": 2097152, 00:20:38.184 "send_buf_size": 2097152, 00:20:38.184 "enable_recv_pipe": true, 00:20:38.184 "enable_quickack": false, 00:20:38.184 "enable_placement_id": 0, 00:20:38.184 "enable_zerocopy_send_server": true, 00:20:38.184 "enable_zerocopy_send_client": false, 00:20:38.184 "zerocopy_threshold": 0, 00:20:38.184 "tls_version": 0, 00:20:38.184 "enable_ktls": false 00:20:38.184 } 00:20:38.184 }, 00:20:38.184 { 00:20:38.184 "method": 
"sock_impl_set_options", 00:20:38.184 "params": { 00:20:38.184 "impl_name": "ssl", 00:20:38.184 "recv_buf_size": 4096, 00:20:38.184 "send_buf_size": 4096, 00:20:38.184 "enable_recv_pipe": true, 00:20:38.184 "enable_quickack": false, 00:20:38.184 "enable_placement_id": 0, 00:20:38.184 "enable_zerocopy_send_server": true, 00:20:38.184 "enable_zerocopy_send_client": false, 00:20:38.184 "zerocopy_threshold": 0, 00:20:38.184 "tls_version": 0, 00:20:38.184 "enable_ktls": false 00:20:38.184 } 00:20:38.184 } 00:20:38.184 ] 00:20:38.184 }, 00:20:38.184 { 00:20:38.184 "subsystem": "vmd", 00:20:38.184 "config": [] 00:20:38.184 }, 00:20:38.184 { 00:20:38.184 "subsystem": "accel", 00:20:38.184 "config": [ 00:20:38.184 { 00:20:38.184 "method": "accel_set_options", 00:20:38.184 "params": { 00:20:38.184 "small_cache_size": 128, 00:20:38.184 "large_cache_size": 16, 00:20:38.184 "task_count": 2048, 00:20:38.184 "sequence_count": 2048, 00:20:38.184 "buf_count": 2048 00:20:38.184 } 00:20:38.184 } 00:20:38.184 ] 00:20:38.184 }, 00:20:38.184 { 00:20:38.184 "subsystem": "bdev", 00:20:38.184 "config": [ 00:20:38.184 { 00:20:38.184 "method": "bdev_set_options", 00:20:38.184 "params": { 00:20:38.184 "bdev_io_pool_size": 65535, 00:20:38.184 "bdev_io_cache_size": 256, 00:20:38.184 "bdev_auto_examine": true, 00:20:38.184 "iobuf_small_cache_size": 128, 00:20:38.184 "iobuf_large_cache_size": 16 00:20:38.184 } 00:20:38.184 }, 00:20:38.184 { 00:20:38.184 "method": "bdev_raid_set_options", 00:20:38.184 "params": { 00:20:38.184 "process_window_size_kb": 1024 00:20:38.184 } 00:20:38.184 }, 00:20:38.184 { 00:20:38.184 "method": "bdev_iscsi_set_options", 00:20:38.184 "params": { 00:20:38.184 "timeout_sec": 30 00:20:38.184 } 00:20:38.184 }, 00:20:38.184 { 00:20:38.184 "method": "bdev_nvme_set_options", 00:20:38.184 "params": { 00:20:38.184 "action_on_timeout": "none", 00:20:38.184 "timeout_us": 0, 00:20:38.184 "timeout_admin_us": 0, 00:20:38.184 "keep_alive_timeout_ms": 10000, 00:20:38.184 "transport_retry_count": 4, 00:20:38.184 "arbitration_burst": 0, 00:20:38.184 "low_priority_weight": 0, 00:20:38.184 "medium_priority_weight": 0, 00:20:38.184 "high_priority_weight": 0, 00:20:38.184 "nvme_adminq_poll_period_us": 10000, 00:20:38.184 "nvme_ioq_poll_period_us": 0, 00:20:38.184 "io_queue_requests": 512, 00:20:38.184 "delay_cmd_submit": true, 00:20:38.184 "bdev_retry_count": 3, 00:20:38.184 "transport_ack_timeout": 0, 00:20:38.184 "ctrlr_loss_timeout_sec": 0, 00:20:38.184 "reconnect_delay_sec": 0, 00:20:38.184 "fast_io_fail_timeout_sec": 0, 00:20:38.184 "generate_uuids": false, 00:20:38.185 "transport_tos": 0, 00:20:38.185 "io_path_stat": false, 00:20:38.185 "allow_accel_sequence": false 00:20:38.185 } 00:20:38.185 }, 00:20:38.185 { 00:20:38.185 "method": "bdev_nvme_attach_controller", 00:20:38.185 "params": { 00:20:38.185 "name": "TLSTEST", 00:20:38.185 "trtype": "TCP", 00:20:38.185 "adrfam": "IPv4", 00:20:38.185 "traddr": "10.0.0.2", 00:20:38.185 "trsvcid": "4420", 00:20:38.185 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.185 "prchk_reftag": false, 00:20:38.185 "prchk_guard": false, 00:20:38.185 "ctrlr_loss_timeout_sec": 0, 00:20:38.185 "reconnect_delay_sec": 0, 00:20:38.185 "fast_io_fail_timeout_sec": 0, 00:20:38.185 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:38.185 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:38.185 "hdgst": false, 00:20:38.185 "ddgst": false 00:20:38.185 } 00:20:38.185 }, 00:20:38.185 { 00:20:38.185 "method": "bdev_nvme_set_hotplug", 00:20:38.185 
"params": { 00:20:38.185 "period_us": 100000, 00:20:38.185 "enable": false 00:20:38.185 } 00:20:38.185 }, 00:20:38.185 { 00:20:38.185 "method": "bdev_wait_for_examine" 00:20:38.185 } 00:20:38.185 ] 00:20:38.185 }, 00:20:38.185 { 00:20:38.185 "subsystem": "nbd", 00:20:38.185 "config": [] 00:20:38.185 } 00:20:38.185 ] 00:20:38.185 }' 00:20:38.185 11:58:31 -- target/tls.sh@208 -- # killprocess 1979010 00:20:38.185 11:58:31 -- common/autotest_common.sh@926 -- # '[' -z 1979010 ']' 00:20:38.185 11:58:31 -- common/autotest_common.sh@930 -- # kill -0 1979010 00:20:38.185 11:58:31 -- common/autotest_common.sh@931 -- # uname 00:20:38.185 11:58:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:38.185 11:58:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1979010 00:20:38.185 11:58:31 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:38.185 11:58:31 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:38.185 11:58:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1979010' 00:20:38.185 killing process with pid 1979010 00:20:38.185 11:58:31 -- common/autotest_common.sh@945 -- # kill 1979010 00:20:38.185 Received shutdown signal, test time was about 10.000000 seconds 00:20:38.185 00:20:38.185 Latency(us) 00:20:38.185 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.185 =================================================================================================================== 00:20:38.185 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:38.185 11:58:31 -- common/autotest_common.sh@950 -- # wait 1979010 00:20:38.185 11:58:31 -- target/tls.sh@209 -- # killprocess 1978655 00:20:38.185 11:58:31 -- common/autotest_common.sh@926 -- # '[' -z 1978655 ']' 00:20:38.185 11:58:31 -- common/autotest_common.sh@930 -- # kill -0 1978655 00:20:38.185 11:58:31 -- common/autotest_common.sh@931 -- # uname 00:20:38.185 11:58:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:38.185 11:58:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1978655 00:20:38.446 11:58:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:38.446 11:58:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:38.446 11:58:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1978655' 00:20:38.446 killing process with pid 1978655 00:20:38.446 11:58:31 -- common/autotest_common.sh@945 -- # kill 1978655 00:20:38.446 11:58:31 -- common/autotest_common.sh@950 -- # wait 1978655 00:20:38.446 11:58:32 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:38.446 11:58:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:38.446 11:58:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:38.446 11:58:32 -- common/autotest_common.sh@10 -- # set +x 00:20:38.446 11:58:32 -- target/tls.sh@212 -- # echo '{ 00:20:38.446 "subsystems": [ 00:20:38.446 { 00:20:38.446 "subsystem": "iobuf", 00:20:38.446 "config": [ 00:20:38.446 { 00:20:38.446 "method": "iobuf_set_options", 00:20:38.446 "params": { 00:20:38.446 "small_pool_count": 8192, 00:20:38.446 "large_pool_count": 1024, 00:20:38.446 "small_bufsize": 8192, 00:20:38.446 "large_bufsize": 135168 00:20:38.446 } 00:20:38.446 } 00:20:38.446 ] 00:20:38.446 }, 00:20:38.446 { 00:20:38.446 "subsystem": "sock", 00:20:38.446 "config": [ 00:20:38.446 { 00:20:38.446 "method": "sock_impl_set_options", 00:20:38.446 "params": { 00:20:38.446 "impl_name": "posix", 00:20:38.446 
"recv_buf_size": 2097152, 00:20:38.446 "send_buf_size": 2097152, 00:20:38.446 "enable_recv_pipe": true, 00:20:38.446 "enable_quickack": false, 00:20:38.446 "enable_placement_id": 0, 00:20:38.446 "enable_zerocopy_send_server": true, 00:20:38.446 "enable_zerocopy_send_client": false, 00:20:38.446 "zerocopy_threshold": 0, 00:20:38.446 "tls_version": 0, 00:20:38.446 "enable_ktls": false 00:20:38.446 } 00:20:38.446 }, 00:20:38.446 { 00:20:38.446 "method": "sock_impl_set_options", 00:20:38.446 "params": { 00:20:38.446 "impl_name": "ssl", 00:20:38.446 "recv_buf_size": 4096, 00:20:38.446 "send_buf_size": 4096, 00:20:38.446 "enable_recv_pipe": true, 00:20:38.446 "enable_quickack": false, 00:20:38.446 "enable_placement_id": 0, 00:20:38.446 "enable_zerocopy_send_server": true, 00:20:38.446 "enable_zerocopy_send_client": false, 00:20:38.446 "zerocopy_threshold": 0, 00:20:38.446 "tls_version": 0, 00:20:38.446 "enable_ktls": false 00:20:38.446 } 00:20:38.446 } 00:20:38.446 ] 00:20:38.446 }, 00:20:38.446 { 00:20:38.446 "subsystem": "vmd", 00:20:38.446 "config": [] 00:20:38.446 }, 00:20:38.446 { 00:20:38.446 "subsystem": "accel", 00:20:38.446 "config": [ 00:20:38.446 { 00:20:38.446 "method": "accel_set_options", 00:20:38.446 "params": { 00:20:38.446 "small_cache_size": 128, 00:20:38.446 "large_cache_size": 16, 00:20:38.446 "task_count": 2048, 00:20:38.446 "sequence_count": 2048, 00:20:38.446 "buf_count": 2048 00:20:38.446 } 00:20:38.446 } 00:20:38.446 ] 00:20:38.446 }, 00:20:38.446 { 00:20:38.446 "subsystem": "bdev", 00:20:38.446 "config": [ 00:20:38.446 { 00:20:38.446 "method": "bdev_set_options", 00:20:38.446 "params": { 00:20:38.446 "bdev_io_pool_size": 65535, 00:20:38.446 "bdev_io_cache_size": 256, 00:20:38.446 "bdev_auto_examine": true, 00:20:38.446 "iobuf_small_cache_size": 128, 00:20:38.446 "iobuf_large_cache_size": 16 00:20:38.446 } 00:20:38.446 }, 00:20:38.446 { 00:20:38.446 "method": "bdev_raid_set_options", 00:20:38.446 "params": { 00:20:38.446 "process_window_size_kb": 1024 00:20:38.446 } 00:20:38.446 }, 00:20:38.446 { 00:20:38.446 "method": "bdev_iscsi_set_options", 00:20:38.446 "params": { 00:20:38.446 "timeout_sec": 30 00:20:38.446 } 00:20:38.446 }, 00:20:38.446 { 00:20:38.446 "method": "bdev_nvme_set_options", 00:20:38.447 "params": { 00:20:38.447 "action_on_timeout": "none", 00:20:38.447 "timeout_us": 0, 00:20:38.447 "timeout_admin_us": 0, 00:20:38.447 "keep_alive_timeout_ms": 10000, 00:20:38.447 "transport_retry_count": 4, 00:20:38.447 "arbitration_burst": 0, 00:20:38.447 "low_priority_weight": 0, 00:20:38.447 "medium_priority_weight": 0, 00:20:38.447 "high_priority_weight": 0, 00:20:38.447 "nvme_adminq_poll_period_us": 10000, 00:20:38.447 "nvme_ioq_poll_period_us": 0, 00:20:38.447 "io_queue_requests": 0, 00:20:38.447 "delay_cmd_submit": true, 00:20:38.447 "bdev_retry_count": 3, 00:20:38.447 "transport_ack_timeout": 0, 00:20:38.447 "ctrlr_loss_timeout_sec": 0, 00:20:38.447 "reconnect_delay_sec": 0, 00:20:38.447 "fast_io_fail_timeout_sec": 0, 00:20:38.447 "generate_uuids": false, 00:20:38.447 "transport_tos": 0, 00:20:38.447 "io_path_stat": false, 00:20:38.447 "allow_accel_sequence": false 00:20:38.447 } 00:20:38.447 }, 00:20:38.447 { 00:20:38.447 "method": "bdev_nvme_set_hotplug", 00:20:38.447 "params": { 00:20:38.447 "period_us": 100000, 00:20:38.447 "enable": false 00:20:38.447 } 00:20:38.447 }, 00:20:38.447 { 00:20:38.447 "method": "bdev_malloc_create", 00:20:38.447 "params": { 00:20:38.447 "name": "malloc0", 00:20:38.447 "num_blocks": 8192, 00:20:38.447 "block_size": 4096, 
00:20:38.447 "physical_block_size": 4096, 00:20:38.447 "uuid": "ea829fe3-c6df-4341-bfa7-fffebe8f4d50", 00:20:38.447 "optimal_io_boundary": 0 00:20:38.447 } 00:20:38.447 }, 00:20:38.447 { 00:20:38.447 "method": "bdev_wait_for_examine" 00:20:38.447 } 00:20:38.447 ] 00:20:38.447 }, 00:20:38.447 { 00:20:38.447 "subsystem": "nbd", 00:20:38.447 "config": [] 00:20:38.447 }, 00:20:38.447 { 00:20:38.447 "subsystem": "scheduler", 00:20:38.447 "config": [ 00:20:38.447 { 00:20:38.447 "method": "framework_set_scheduler", 00:20:38.447 "params": { 00:20:38.447 "name": "static" 00:20:38.447 } 00:20:38.447 } 00:20:38.447 ] 00:20:38.447 }, 00:20:38.447 { 00:20:38.447 "subsystem": "nvmf", 00:20:38.447 "config": [ 00:20:38.447 { 00:20:38.447 "method": "nvmf_set_config", 00:20:38.447 "params": { 00:20:38.447 "discovery_filter": "match_any", 00:20:38.447 "admin_cmd_passthru": { 00:20:38.447 "identify_ctrlr": false 00:20:38.447 } 00:20:38.447 } 00:20:38.447 }, 00:20:38.447 { 00:20:38.447 "method": "nvmf_set_max_subsystems", 00:20:38.447 "params": { 00:20:38.447 "max_subsystems": 1024 00:20:38.447 } 00:20:38.447 }, 00:20:38.447 { 00:20:38.447 "method": "nvmf_set_crdt", 00:20:38.447 "params": { 00:20:38.447 "crdt1": 0, 00:20:38.447 "crdt2": 0, 00:20:38.447 "crdt3": 0 00:20:38.447 } 00:20:38.447 }, 00:20:38.447 { 00:20:38.447 "method": "nvmf_create_transport", 00:20:38.447 "params": { 00:20:38.447 "trtype": "TCP", 00:20:38.447 "max_queue_depth": 128, 00:20:38.447 "max_io_qpairs_per_ctrlr": 127, 00:20:38.447 "in_capsule_data_size": 4096, 00:20:38.447 "max_io_size": 131072, 00:20:38.447 "io_unit_size": 131072, 00:20:38.447 "max_aq_depth": 128, 00:20:38.447 "num_shared_buffers": 511, 00:20:38.447 "buf_cache_size": 4294967295, 00:20:38.447 "dif_insert_or_strip": false, 00:20:38.447 "zcopy": false, 00:20:38.447 "c2h_success": false, 00:20:38.447 "sock_priority": 0, 00:20:38.447 "abort_timeout_sec": 1 00:20:38.447 } 00:20:38.447 }, 00:20:38.447 { 00:20:38.447 "method": "nvmf_create_subsystem", 00:20:38.447 "params": { 00:20:38.447 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.447 "allow_any_host": false, 00:20:38.447 "serial_number": "SPDK00000000000001", 00:20:38.447 "model_number": "SPDK bdev Controller", 00:20:38.447 "max_namespaces": 10, 00:20:38.447 "min_cntlid": 1, 00:20:38.447 "max_cntlid": 65519, 00:20:38.447 "ana_reporting": false 00:20:38.447 } 00:20:38.447 }, 00:20:38.447 { 00:20:38.447 "method": "nvmf_subsystem_add_host", 00:20:38.447 "params": { 00:20:38.447 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.447 "host": "nqn.2016-06.io.spdk:host1", 00:20:38.447 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:38.447 } 00:20:38.447 }, 00:20:38.447 { 00:20:38.447 "method": "nvmf_subsystem_add_ns", 00:20:38.447 "params": { 00:20:38.447 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.447 "namespace": { 00:20:38.447 "nsid": 1, 00:20:38.447 "bdev_name": "malloc0", 00:20:38.447 "nguid": "EA829FE3C6DF4341BFA7FFFEBE8F4D50", 00:20:38.447 "uuid": "ea829fe3-c6df-4341-bfa7-fffebe8f4d50" 00:20:38.447 } 00:20:38.447 } 00:20:38.447 }, 00:20:38.447 { 00:20:38.447 "method": "nvmf_subsystem_add_listener", 00:20:38.447 "params": { 00:20:38.447 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.447 "listen_address": { 00:20:38.447 "trtype": "TCP", 00:20:38.447 "adrfam": "IPv4", 00:20:38.447 "traddr": "10.0.0.2", 00:20:38.447 "trsvcid": "4420" 00:20:38.447 }, 00:20:38.447 "secure_channel": true 00:20:38.447 } 00:20:38.447 } 00:20:38.447 ] 00:20:38.447 } 00:20:38.447 ] 00:20:38.447 }' 00:20:38.447 
11:58:32 -- nvmf/common.sh@469 -- # nvmfpid=1979433 00:20:38.447 11:58:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:38.447 11:58:32 -- nvmf/common.sh@470 -- # waitforlisten 1979433 00:20:38.447 11:58:32 -- common/autotest_common.sh@819 -- # '[' -z 1979433 ']' 00:20:38.447 11:58:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.447 11:58:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:38.447 11:58:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.447 11:58:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:38.447 11:58:32 -- common/autotest_common.sh@10 -- # set +x 00:20:38.447 [2024-06-10 11:58:32.145455] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:38.447 [2024-06-10 11:58:32.145509] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.447 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.707 [2024-06-10 11:58:32.227447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.707 [2024-06-10 11:58:32.279863] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:38.707 [2024-06-10 11:58:32.279958] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.707 [2024-06-10 11:58:32.279964] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.707 [2024-06-10 11:58:32.279968] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
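The two *NOTICE* lines above are the target's own pointers for debugging this stage: tracepoints for group mask 0xFFFF are recorded into a shared-memory ring named after the app instance. Following those hints verbatim gives a short capture recipe (sketch; the file name is the one printed in the notice):

    spdk_trace -s nvmf -i 0       # snapshot the live tracepoint ring for app instance 0
    cp /dev/shm/nvmf_trace.0 .    # or keep the raw ring buffer for offline analysis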
00:20:38.707 [2024-06-10 11:58:32.279982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.708 [2024-06-10 11:58:32.455150] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.967 [2024-06-10 11:58:32.487185] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:38.967 [2024-06-10 11:58:32.487359] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.228 11:58:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:39.228 11:58:32 -- common/autotest_common.sh@852 -- # return 0 00:20:39.228 11:58:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:39.228 11:58:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:39.228 11:58:32 -- common/autotest_common.sh@10 -- # set +x 00:20:39.228 11:58:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.228 11:58:32 -- target/tls.sh@216 -- # bdevperf_pid=1979527 00:20:39.228 11:58:32 -- target/tls.sh@217 -- # waitforlisten 1979527 /var/tmp/bdevperf.sock 00:20:39.228 11:58:32 -- common/autotest_common.sh@819 -- # '[' -z 1979527 ']' 00:20:39.228 11:58:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:39.228 11:58:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:39.228 11:58:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:39.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:39.228 11:58:32 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:39.228 11:58:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:39.228 11:58:32 -- common/autotest_common.sh@10 -- # set +x 00:20:39.228 11:58:32 -- target/tls.sh@213 -- # echo '{ 00:20:39.228 "subsystems": [ 00:20:39.228 { 00:20:39.228 "subsystem": "iobuf", 00:20:39.228 "config": [ 00:20:39.228 { 00:20:39.228 "method": "iobuf_set_options", 00:20:39.228 "params": { 00:20:39.228 "small_pool_count": 8192, 00:20:39.228 "large_pool_count": 1024, 00:20:39.228 "small_bufsize": 8192, 00:20:39.228 "large_bufsize": 135168 00:20:39.228 } 00:20:39.228 } 00:20:39.228 ] 00:20:39.228 }, 00:20:39.228 { 00:20:39.228 "subsystem": "sock", 00:20:39.228 "config": [ 00:20:39.228 { 00:20:39.228 "method": "sock_impl_set_options", 00:20:39.228 "params": { 00:20:39.228 "impl_name": "posix", 00:20:39.228 "recv_buf_size": 2097152, 00:20:39.228 "send_buf_size": 2097152, 00:20:39.228 "enable_recv_pipe": true, 00:20:39.228 "enable_quickack": false, 00:20:39.228 "enable_placement_id": 0, 00:20:39.228 "enable_zerocopy_send_server": true, 00:20:39.228 "enable_zerocopy_send_client": false, 00:20:39.228 "zerocopy_threshold": 0, 00:20:39.228 "tls_version": 0, 00:20:39.228 "enable_ktls": false 00:20:39.228 } 00:20:39.228 }, 00:20:39.228 { 00:20:39.228 "method": "sock_impl_set_options", 00:20:39.228 "params": { 00:20:39.228 "impl_name": "ssl", 00:20:39.228 "recv_buf_size": 4096, 00:20:39.228 "send_buf_size": 4096, 00:20:39.228 "enable_recv_pipe": true, 00:20:39.228 "enable_quickack": false, 00:20:39.228 "enable_placement_id": 0, 00:20:39.228 "enable_zerocopy_send_server": true, 00:20:39.228 "enable_zerocopy_send_client": false, 00:20:39.228 "zerocopy_threshold": 0, 00:20:39.228 "tls_version": 0, 
00:20:39.228 "enable_ktls": false 00:20:39.228 } 00:20:39.228 } 00:20:39.228 ] 00:20:39.228 }, 00:20:39.228 { 00:20:39.228 "subsystem": "vmd", 00:20:39.228 "config": [] 00:20:39.228 }, 00:20:39.228 { 00:20:39.228 "subsystem": "accel", 00:20:39.228 "config": [ 00:20:39.228 { 00:20:39.228 "method": "accel_set_options", 00:20:39.228 "params": { 00:20:39.228 "small_cache_size": 128, 00:20:39.228 "large_cache_size": 16, 00:20:39.228 "task_count": 2048, 00:20:39.228 "sequence_count": 2048, 00:20:39.228 "buf_count": 2048 00:20:39.228 } 00:20:39.228 } 00:20:39.228 ] 00:20:39.228 }, 00:20:39.228 { 00:20:39.228 "subsystem": "bdev", 00:20:39.228 "config": [ 00:20:39.228 { 00:20:39.228 "method": "bdev_set_options", 00:20:39.228 "params": { 00:20:39.228 "bdev_io_pool_size": 65535, 00:20:39.228 "bdev_io_cache_size": 256, 00:20:39.228 "bdev_auto_examine": true, 00:20:39.228 "iobuf_small_cache_size": 128, 00:20:39.228 "iobuf_large_cache_size": 16 00:20:39.228 } 00:20:39.228 }, 00:20:39.228 { 00:20:39.228 "method": "bdev_raid_set_options", 00:20:39.228 "params": { 00:20:39.228 "process_window_size_kb": 1024 00:20:39.228 } 00:20:39.228 }, 00:20:39.228 { 00:20:39.228 "method": "bdev_iscsi_set_options", 00:20:39.228 "params": { 00:20:39.228 "timeout_sec": 30 00:20:39.228 } 00:20:39.228 }, 00:20:39.228 { 00:20:39.228 "method": "bdev_nvme_set_options", 00:20:39.228 "params": { 00:20:39.228 "action_on_timeout": "none", 00:20:39.228 "timeout_us": 0, 00:20:39.228 "timeout_admin_us": 0, 00:20:39.228 "keep_alive_timeout_ms": 10000, 00:20:39.228 "transport_retry_count": 4, 00:20:39.228 "arbitration_burst": 0, 00:20:39.228 "low_priority_weight": 0, 00:20:39.228 "medium_priority_weight": 0, 00:20:39.228 "high_priority_weight": 0, 00:20:39.228 "nvme_adminq_poll_period_us": 10000, 00:20:39.228 "nvme_ioq_poll_period_us": 0, 00:20:39.228 "io_queue_requests": 512, 00:20:39.228 "delay_cmd_submit": true, 00:20:39.228 "bdev_retry_count": 3, 00:20:39.228 "transport_ack_timeout": 0, 00:20:39.228 "ctrlr_loss_timeout_sec": 0, 00:20:39.228 "reconnect_delay_sec": 0, 00:20:39.228 "fast_io_fail_timeout_sec": 0, 00:20:39.228 "generate_uuids": false, 00:20:39.228 "transport_tos": 0, 00:20:39.228 "io_path_stat": false, 00:20:39.228 "allow_accel_sequence": false 00:20:39.228 } 00:20:39.228 }, 00:20:39.228 { 00:20:39.228 "method": "bdev_nvme_attach_controller", 00:20:39.228 "params": { 00:20:39.228 "name": "TLSTEST", 00:20:39.228 "trtype": "TCP", 00:20:39.228 "adrfam": "IPv4", 00:20:39.228 "traddr": "10.0.0.2", 00:20:39.228 "trsvcid": "4420", 00:20:39.228 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.228 "prchk_reftag": false, 00:20:39.228 "prchk_guard": false, 00:20:39.228 "ctrlr_loss_timeout_sec": 0, 00:20:39.228 "reconnect_delay_sec": 0, 00:20:39.228 "fast_io_fail_timeout_sec": 0, 00:20:39.228 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:39.228 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:39.228 "hdgst": false, 00:20:39.228 "ddgst": false 00:20:39.228 } 00:20:39.228 }, 00:20:39.228 { 00:20:39.228 "method": "bdev_nvme_set_hotplug", 00:20:39.228 "params": { 00:20:39.228 "period_us": 100000, 00:20:39.228 "enable": false 00:20:39.228 } 00:20:39.228 }, 00:20:39.228 { 00:20:39.228 "method": "bdev_wait_for_examine" 00:20:39.228 } 00:20:39.228 ] 00:20:39.228 }, 00:20:39.228 { 00:20:39.228 "subsystem": "nbd", 00:20:39.228 "config": [] 00:20:39.228 } 00:20:39.228 ] 00:20:39.228 }' 00:20:39.228 [2024-06-10 11:58:32.990943] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 
initialization... 00:20:39.228 [2024-06-10 11:58:32.991031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1979527 ] 00:20:39.489 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.489 [2024-06-10 11:58:33.047144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.489 [2024-06-10 11:58:33.097410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.489 [2024-06-10 11:58:33.213146] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:40.060 11:58:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:40.060 11:58:33 -- common/autotest_common.sh@852 -- # return 0 00:20:40.060 11:58:33 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:40.060 Running I/O for 10 seconds... 00:20:52.293 00:20:52.293 Latency(us) 00:20:52.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.293 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:52.293 Verification LBA range: start 0x0 length 0x2000 00:20:52.293 TLSTESTn1 : 10.01 5115.68 19.98 0.00 0.00 24998.22 4915.20 56797.87 00:20:52.293 =================================================================================================================== 00:20:52.293 Total : 5115.68 19.98 0.00 0.00 24998.22 4915.20 56797.87 00:20:52.293 0 00:20:52.293 11:58:43 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:52.293 11:58:43 -- target/tls.sh@223 -- # killprocess 1979527 00:20:52.293 11:58:43 -- common/autotest_common.sh@926 -- # '[' -z 1979527 ']' 00:20:52.293 11:58:43 -- common/autotest_common.sh@930 -- # kill -0 1979527 00:20:52.293 11:58:43 -- common/autotest_common.sh@931 -- # uname 00:20:52.293 11:58:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:52.293 11:58:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1979527 00:20:52.293 11:58:43 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:52.293 11:58:43 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:52.293 11:58:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1979527' 00:20:52.293 killing process with pid 1979527 00:20:52.293 11:58:43 -- common/autotest_common.sh@945 -- # kill 1979527 00:20:52.293 Received shutdown signal, test time was about 10.000000 seconds 00:20:52.293 00:20:52.293 Latency(us) 00:20:52.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.293 =================================================================================================================== 00:20:52.293 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:52.293 11:58:43 -- common/autotest_common.sh@950 -- # wait 1979527 00:20:52.293 11:58:44 -- target/tls.sh@224 -- # killprocess 1979433 00:20:52.293 11:58:44 -- common/autotest_common.sh@926 -- # '[' -z 1979433 ']' 00:20:52.293 11:58:44 -- common/autotest_common.sh@930 -- # kill -0 1979433 00:20:52.293 11:58:44 -- common/autotest_common.sh@931 -- # uname 00:20:52.293 11:58:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:52.293 11:58:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1979433 00:20:52.293 11:58:44 -- 
common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:52.293 11:58:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:52.293 11:58:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1979433' 00:20:52.293 killing process with pid 1979433 00:20:52.293 11:58:44 -- common/autotest_common.sh@945 -- # kill 1979433 00:20:52.293 11:58:44 -- common/autotest_common.sh@950 -- # wait 1979433 00:20:52.293 11:58:44 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:20:52.293 11:58:44 -- target/tls.sh@227 -- # cleanup 00:20:52.293 11:58:44 -- target/tls.sh@15 -- # process_shm --id 0 00:20:52.293 11:58:44 -- common/autotest_common.sh@796 -- # type=--id 00:20:52.293 11:58:44 -- common/autotest_common.sh@797 -- # id=0 00:20:52.293 11:58:44 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:20:52.293 11:58:44 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:52.293 11:58:44 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:20:52.293 11:58:44 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:20:52.293 11:58:44 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:20:52.293 11:58:44 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:52.293 nvmf_trace.0 00:20:52.293 11:58:44 -- common/autotest_common.sh@811 -- # return 0 00:20:52.293 11:58:44 -- target/tls.sh@16 -- # killprocess 1979527 00:20:52.293 11:58:44 -- common/autotest_common.sh@926 -- # '[' -z 1979527 ']' 00:20:52.293 11:58:44 -- common/autotest_common.sh@930 -- # kill -0 1979527 00:20:52.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1979527) - No such process 00:20:52.293 11:58:44 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1979527 is not found' 00:20:52.293 Process with pid 1979527 is not found 00:20:52.293 11:58:44 -- target/tls.sh@17 -- # nvmftestfini 00:20:52.293 11:58:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:52.293 11:58:44 -- nvmf/common.sh@116 -- # sync 00:20:52.293 11:58:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:52.293 11:58:44 -- nvmf/common.sh@119 -- # set +e 00:20:52.293 11:58:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:52.293 11:58:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:52.293 rmmod nvme_tcp 00:20:52.293 rmmod nvme_fabrics 00:20:52.293 rmmod nvme_keyring 00:20:52.293 11:58:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:52.293 11:58:44 -- nvmf/common.sh@123 -- # set -e 00:20:52.293 11:58:44 -- nvmf/common.sh@124 -- # return 0 00:20:52.293 11:58:44 -- nvmf/common.sh@477 -- # '[' -n 1979433 ']' 00:20:52.293 11:58:44 -- nvmf/common.sh@478 -- # killprocess 1979433 00:20:52.293 11:58:44 -- common/autotest_common.sh@926 -- # '[' -z 1979433 ']' 00:20:52.293 11:58:44 -- common/autotest_common.sh@930 -- # kill -0 1979433 00:20:52.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1979433) - No such process 00:20:52.293 11:58:44 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1979433 is not found' 00:20:52.293 Process with pid 1979433 is not found 00:20:52.293 11:58:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:52.293 11:58:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:52.293 11:58:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:52.293 11:58:44 -- nvmf/common.sh@273 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:52.293 11:58:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:52.293 11:58:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.293 11:58:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:52.293 11:58:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.865 11:58:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:52.865 11:58:46 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:52.865 00:20:52.865 real 1m11.850s 00:20:52.865 user 1m44.783s 00:20:52.865 sys 0m26.824s 00:20:52.865 11:58:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:52.865 11:58:46 -- common/autotest_common.sh@10 -- # set +x 00:20:52.865 ************************************ 00:20:52.865 END TEST nvmf_tls 00:20:52.865 ************************************ 00:20:52.865 11:58:46 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:52.865 11:58:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:52.865 11:58:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:52.865 11:58:46 -- common/autotest_common.sh@10 -- # set +x 00:20:52.865 ************************************ 00:20:52.865 START TEST nvmf_fips 00:20:52.865 ************************************ 00:20:52.865 11:58:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:52.865 * Looking for test storage... 
00:20:52.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:52.865 11:58:46 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:52.865 11:58:46 -- nvmf/common.sh@7 -- # uname -s 00:20:52.865 11:58:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:52.865 11:58:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:52.865 11:58:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:52.865 11:58:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:52.865 11:58:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:52.865 11:58:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:52.865 11:58:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:52.865 11:58:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:52.865 11:58:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:52.865 11:58:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:52.865 11:58:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:52.865 11:58:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:52.865 11:58:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:52.865 11:58:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:52.865 11:58:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:52.865 11:58:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:52.865 11:58:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:52.865 11:58:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:52.865 11:58:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:52.865 11:58:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.865 11:58:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.865 11:58:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.865 11:58:46 -- paths/export.sh@5 -- # export PATH 00:20:52.865 11:58:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.865 11:58:46 -- nvmf/common.sh@46 -- # : 0 00:20:52.865 11:58:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:52.865 11:58:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:52.865 11:58:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:52.865 11:58:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:52.865 11:58:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:52.865 11:58:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:52.865 11:58:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:52.865 11:58:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:52.865 11:58:46 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:52.865 11:58:46 -- fips/fips.sh@89 -- # check_openssl_version 00:20:52.865 11:58:46 -- fips/fips.sh@83 -- # local target=3.0.0 00:20:52.865 11:58:46 -- fips/fips.sh@85 -- # openssl version 00:20:52.865 11:58:46 -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:52.865 11:58:46 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:52.865 11:58:46 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:52.865 11:58:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:52.865 11:58:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:52.865 11:58:46 -- scripts/common.sh@335 -- # IFS=.-: 00:20:52.865 11:58:46 -- scripts/common.sh@335 -- # read -ra ver1 00:20:52.866 11:58:46 -- scripts/common.sh@336 -- # IFS=.-: 00:20:52.866 11:58:46 -- scripts/common.sh@336 -- # read -ra ver2 00:20:52.866 11:58:46 -- scripts/common.sh@337 -- # local 'op=>=' 00:20:52.866 11:58:46 -- scripts/common.sh@339 -- # ver1_l=3 00:20:52.866 11:58:46 -- scripts/common.sh@340 -- # ver2_l=3 00:20:52.866 11:58:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:52.866 11:58:46 -- scripts/common.sh@343 -- # case "$op" in 00:20:52.866 11:58:46 -- scripts/common.sh@347 -- # : 1 00:20:52.866 11:58:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:52.866 11:58:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:52.866 11:58:46 -- scripts/common.sh@364 -- # decimal 3 00:20:53.127 11:58:46 -- scripts/common.sh@352 -- # local d=3 00:20:53.127 11:58:46 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:53.127 11:58:46 -- scripts/common.sh@354 -- # echo 3 00:20:53.127 11:58:46 -- scripts/common.sh@364 -- # ver1[v]=3 00:20:53.127 11:58:46 -- scripts/common.sh@365 -- # decimal 3 00:20:53.127 11:58:46 -- scripts/common.sh@352 -- # local d=3 00:20:53.127 11:58:46 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:53.127 11:58:46 -- scripts/common.sh@354 -- # echo 3 00:20:53.127 11:58:46 -- scripts/common.sh@365 -- # ver2[v]=3 00:20:53.127 11:58:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:53.127 11:58:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:53.127 11:58:46 -- scripts/common.sh@363 -- # (( v++ )) 00:20:53.127 11:58:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:53.127 11:58:46 -- scripts/common.sh@364 -- # decimal 0 00:20:53.127 11:58:46 -- scripts/common.sh@352 -- # local d=0 00:20:53.127 11:58:46 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:53.127 11:58:46 -- scripts/common.sh@354 -- # echo 0 00:20:53.127 11:58:46 -- scripts/common.sh@364 -- # ver1[v]=0 00:20:53.127 11:58:46 -- scripts/common.sh@365 -- # decimal 0 00:20:53.127 11:58:46 -- scripts/common.sh@352 -- # local d=0 00:20:53.127 11:58:46 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:53.127 11:58:46 -- scripts/common.sh@354 -- # echo 0 00:20:53.127 11:58:46 -- scripts/common.sh@365 -- # ver2[v]=0 00:20:53.127 11:58:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:53.127 11:58:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:53.127 11:58:46 -- scripts/common.sh@363 -- # (( v++ )) 00:20:53.127 11:58:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:53.127 11:58:46 -- scripts/common.sh@364 -- # decimal 9 00:20:53.127 11:58:46 -- scripts/common.sh@352 -- # local d=9 00:20:53.127 11:58:46 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:53.127 11:58:46 -- scripts/common.sh@354 -- # echo 9 00:20:53.127 11:58:46 -- scripts/common.sh@364 -- # ver1[v]=9 00:20:53.127 11:58:46 -- scripts/common.sh@365 -- # decimal 0 00:20:53.127 11:58:46 -- scripts/common.sh@352 -- # local d=0 00:20:53.127 11:58:46 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:53.127 11:58:46 -- scripts/common.sh@354 -- # echo 0 00:20:53.127 11:58:46 -- scripts/common.sh@365 -- # ver2[v]=0 00:20:53.127 11:58:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:53.127 11:58:46 -- scripts/common.sh@366 -- # return 0 00:20:53.127 11:58:46 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:53.127 11:58:46 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:53.127 11:58:46 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:53.127 11:58:46 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:53.127 11:58:46 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:53.127 11:58:46 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:53.127 11:58:46 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:53.127 11:58:46 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:20:53.127 11:58:46 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:20:53.127 11:58:46 -- fips/fips.sh@114 -- # build_openssl_config 00:20:53.127 11:58:46 -- fips/fips.sh@37 -- # cat 00:20:53.127 11:58:46 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:20:53.127 11:58:46 -- fips/fips.sh@58 -- # cat - 00:20:53.127 11:58:46 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:53.127 11:58:46 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:53.127 11:58:46 -- fips/fips.sh@117 -- # mapfile -t providers 00:20:53.127 11:58:46 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:20:53.127 11:58:46 -- fips/fips.sh@117 -- # openssl list -providers 00:20:53.127 11:58:46 -- fips/fips.sh@117 -- # grep name 00:20:53.127 11:58:46 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:53.127 11:58:46 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:53.127 11:58:46 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:53.127 11:58:46 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:53.127 11:58:46 -- common/autotest_common.sh@640 -- # local es=0 00:20:53.127 11:58:46 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:53.127 11:58:46 -- fips/fips.sh@128 -- # : 00:20:53.127 11:58:46 -- common/autotest_common.sh@628 -- # local arg=openssl 00:20:53.127 11:58:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:53.127 11:58:46 -- common/autotest_common.sh@632 -- # type -t openssl 00:20:53.127 11:58:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:53.127 11:58:46 -- common/autotest_common.sh@634 -- # type -P openssl 00:20:53.127 11:58:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:53.127 11:58:46 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:20:53.127 11:58:46 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:20:53.127 11:58:46 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:20:53.127 Error setting digest 00:20:53.127 0002A549A77F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:53.127 0002A549A77F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:53.127 11:58:46 -- common/autotest_common.sh@643 -- # es=1 00:20:53.127 11:58:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:53.127 11:58:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:53.127 11:58:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
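Editor's note: the fips.sh trace above amounts to three checks - OpenSSL is at least 3.0.0, a fips provider shows up in the provider list, and a non-approved digest (MD5) is refused, which is exactly the "Error setting digest" failure logged next. A minimal stand-alone sketch of the same checks, assuming GNU coreutils and OpenSSL 3 on PATH; the version comparison here uses sort -V instead of the script's own cmp_versions helper, so it is illustrative rather than the exact SPDK code:

    #!/usr/bin/env bash
    # Sketch of the FIPS sanity checks performed by fips.sh above (illustrative).
    set -euo pipefail

    # 1) OpenSSL must be >= 3.0.0 for provider-based FIPS support.
    ver=$(openssl version | awk '{print $2}')
    [[ $(printf '%s\n' "3.0.0" "$ver" | sort -V | head -n1) == "3.0.0" ]] \
        || { echo "openssl $ver is older than 3.0.0"; exit 1; }

    # 2) A fips provider has to be loaded.
    openssl list -providers | grep -qi fips \
        || { echo "no fips provider loaded"; exit 1; }

    # 3) A non-approved digest such as MD5 must fail, as in the log above.
    if echo test | openssl md5 >/dev/null 2>&1; then
        echo "MD5 unexpectedly succeeded - FIPS mode is not enforced"
        exit 1
    fi
    echo "FIPS checks passed"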
00:20:53.127 11:58:46 -- fips/fips.sh@131 -- # nvmftestinit 00:20:53.127 11:58:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:53.127 11:58:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.127 11:58:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:53.127 11:58:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:53.127 11:58:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:53.127 11:58:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.127 11:58:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:53.127 11:58:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.127 11:58:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:53.127 11:58:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:53.127 11:58:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:53.127 11:58:46 -- common/autotest_common.sh@10 -- # set +x 00:21:01.276 11:58:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:01.276 11:58:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:01.276 11:58:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:01.276 11:58:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:01.276 11:58:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:01.276 11:58:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:01.276 11:58:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:01.276 11:58:53 -- nvmf/common.sh@294 -- # net_devs=() 00:21:01.276 11:58:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:01.276 11:58:53 -- nvmf/common.sh@295 -- # e810=() 00:21:01.276 11:58:53 -- nvmf/common.sh@295 -- # local -ga e810 00:21:01.276 11:58:53 -- nvmf/common.sh@296 -- # x722=() 00:21:01.276 11:58:53 -- nvmf/common.sh@296 -- # local -ga x722 00:21:01.276 11:58:53 -- nvmf/common.sh@297 -- # mlx=() 00:21:01.276 11:58:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:01.276 11:58:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:01.276 11:58:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:01.276 11:58:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:01.276 11:58:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:01.276 11:58:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:01.276 11:58:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:01.276 11:58:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:01.276 11:58:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:01.276 11:58:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:01.276 11:58:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:01.276 11:58:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:01.276 11:58:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:01.276 11:58:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:01.276 11:58:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:01.276 11:58:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:01.276 11:58:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:01.276 11:58:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:01.276 11:58:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:01.276 11:58:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:01.276 Found 0000:31:00.0 
(0x8086 - 0x159b) 00:21:01.276 11:58:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:01.276 11:58:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:01.276 11:58:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.276 11:58:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.276 11:58:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:01.276 11:58:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:01.276 11:58:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:01.276 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:01.276 11:58:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:01.276 11:58:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:01.276 11:58:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.276 11:58:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.276 11:58:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:01.276 11:58:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:01.276 11:58:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:01.276 11:58:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:01.276 11:58:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:01.276 11:58:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.276 11:58:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:01.276 11:58:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.276 11:58:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:01.276 Found net devices under 0000:31:00.0: cvl_0_0 00:21:01.276 11:58:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.276 11:58:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:01.276 11:58:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.276 11:58:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:01.276 11:58:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.276 11:58:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:01.276 Found net devices under 0000:31:00.1: cvl_0_1 00:21:01.276 11:58:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.276 11:58:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:01.276 11:58:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:01.276 11:58:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:01.276 11:58:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:01.276 11:58:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:01.276 11:58:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:01.276 11:58:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:01.276 11:58:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:01.276 11:58:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:01.276 11:58:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:01.276 11:58:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:01.276 11:58:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:01.276 11:58:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:01.276 11:58:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:01.276 11:58:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:01.276 11:58:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:01.276 11:58:53 -- nvmf/common.sh@247 -- # ip netns 
add cvl_0_0_ns_spdk 00:21:01.276 11:58:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:01.276 11:58:53 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:01.276 11:58:53 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:01.276 11:58:53 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:01.276 11:58:53 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:01.276 11:58:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:01.276 11:58:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:01.276 11:58:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:01.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:01.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:21:01.276 00:21:01.276 --- 10.0.0.2 ping statistics --- 00:21:01.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.276 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:21:01.276 11:58:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:01.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:01.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:21:01.276 00:21:01.276 --- 10.0.0.1 ping statistics --- 00:21:01.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.276 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:21:01.276 11:58:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:01.276 11:58:54 -- nvmf/common.sh@410 -- # return 0 00:21:01.276 11:58:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:01.276 11:58:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:01.276 11:58:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:01.276 11:58:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:01.276 11:58:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:01.276 11:58:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:01.276 11:58:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:01.276 11:58:54 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:01.276 11:58:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:01.276 11:58:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:01.276 11:58:54 -- common/autotest_common.sh@10 -- # set +x 00:21:01.276 11:58:54 -- nvmf/common.sh@469 -- # nvmfpid=1985999 00:21:01.276 11:58:54 -- nvmf/common.sh@470 -- # waitforlisten 1985999 00:21:01.276 11:58:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:01.276 11:58:54 -- common/autotest_common.sh@819 -- # '[' -z 1985999 ']' 00:21:01.276 11:58:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.276 11:58:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:01.276 11:58:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.276 11:58:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:01.276 11:58:54 -- common/autotest_common.sh@10 -- # set +x 00:21:01.276 [2024-06-10 11:58:54.154626] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
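Editor's note: the nvmf_tcp_init block above is the standard phy-test network bring-up - one port of the E810 pair is moved into a private network namespace so a single host can act as both initiator and target over real hardware. The same sequence, condensed from the logged commands into a stand-alone sketch (interface and namespace names are the ones from this log; this is a restatement, not the SPDK helper itself):

    #!/usr/bin/env bash
    # Move the target-side port into a netns, address both sides, open port 4420.
    set -euo pipefail

    TGT_IF=cvl_0_0          # target side, moved into the namespace
    INI_IF=cvl_0_1          # initiator side, stays in the default namespace
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Allow NVMe/TCP traffic to the default port and verify reachability both ways.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1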
00:21:01.276 [2024-06-10 11:58:54.154682] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.276 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.276 [2024-06-10 11:58:54.236581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.276 [2024-06-10 11:58:54.299546] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:01.276 [2024-06-10 11:58:54.299665] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:01.276 [2024-06-10 11:58:54.299673] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:01.276 [2024-06-10 11:58:54.299681] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:01.276 [2024-06-10 11:58:54.299703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.276 11:58:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:01.276 11:58:54 -- common/autotest_common.sh@852 -- # return 0 00:21:01.276 11:58:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:01.276 11:58:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:01.276 11:58:54 -- common/autotest_common.sh@10 -- # set +x 00:21:01.276 11:58:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.276 11:58:54 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:01.276 11:58:54 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:01.277 11:58:54 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:01.277 11:58:54 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:01.277 11:58:54 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:01.277 11:58:54 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:01.277 11:58:54 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:01.277 11:58:54 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:01.538 [2024-06-10 11:58:55.132656] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.538 [2024-06-10 11:58:55.148662] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:01.538 [2024-06-10 11:58:55.148914] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.538 malloc0 00:21:01.538 11:58:55 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:01.538 11:58:55 -- fips/fips.sh@148 -- # bdevperf_pid=1986354 00:21:01.538 11:58:55 -- fips/fips.sh@149 -- # waitforlisten 1986354 /var/tmp/bdevperf.sock 00:21:01.538 11:58:55 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:01.538 11:58:55 -- common/autotest_common.sh@819 -- # '[' -z 1986354 ']' 00:21:01.538 11:58:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.538 11:58:55 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:21:01.538 11:58:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:01.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:01.538 11:58:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:01.538 11:58:55 -- common/autotest_common.sh@10 -- # set +x 00:21:01.538 [2024-06-10 11:58:55.277311] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:01.538 [2024-06-10 11:58:55.277383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1986354 ] 00:21:01.538 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.799 [2024-06-10 11:58:55.332376] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.799 [2024-06-10 11:58:55.395208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.370 11:58:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:02.370 11:58:56 -- common/autotest_common.sh@852 -- # return 0 00:21:02.370 11:58:56 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:02.631 [2024-06-10 11:58:56.159119] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:02.631 TLSTESTn1 00:21:02.631 11:58:56 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:02.631 Running I/O for 10 seconds... 
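Editor's note: the run above drives a 10-second queued verify workload through bdevperf against the TLS-enabled listener; its latency summary follows. The initiator-side steps leading up to it, collapsed into a stand-alone sketch - the paths and RPC socket are placeholders, and the target-side subsystem/listener configuration done by setup_nvmf_tgt_conf in fips.sh is assumed to already be in place:

    #!/usr/bin/env bash
    # Sketch: write the TLS PSK with restrictive permissions, attach an NVMe/TCP
    # controller through bdevperf's RPC socket with --psk, then drive I/O.
    set -euo pipefail

    RPC_PY=./scripts/rpc.py                       # assumed SPDK checkout layout
    BPERF_PY=./examples/bdev/bdevperf/bdevperf.py
    SOCK=/var/tmp/bdevperf.sock
    KEY_PATH=/tmp/key.txt

    # PSK in the NVMe TLS interchange format, as logged above.
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$KEY_PATH"
    chmod 0600 "$KEY_PATH"

    # Attach over NVMe/TCP with the PSK; bdevperf was started with "-z -r $SOCK".
    "$RPC_PY" -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk "$KEY_PATH"

    # Kick off the queued workload (qd 128, 4 KiB, verify, 10 s per the bdevperf flags).
    "$BPERF_PY" -s "$SOCK" perform_tests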
00:21:14.868 00:21:14.868 Latency(us) 00:21:14.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.868 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:14.868 Verification LBA range: start 0x0 length 0x2000 00:21:14.868 TLSTESTn1 : 10.04 2877.72 11.24 0.00 0.00 44408.06 3713.71 60730.03 00:21:14.868 =================================================================================================================== 00:21:14.868 Total : 2877.72 11.24 0.00 0.00 44408.06 3713.71 60730.03 00:21:14.868 0 00:21:14.868 11:59:06 -- fips/fips.sh@1 -- # cleanup 00:21:14.868 11:59:06 -- fips/fips.sh@15 -- # process_shm --id 0 00:21:14.868 11:59:06 -- common/autotest_common.sh@796 -- # type=--id 00:21:14.868 11:59:06 -- common/autotest_common.sh@797 -- # id=0 00:21:14.868 11:59:06 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:21:14.868 11:59:06 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:14.868 11:59:06 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:21:14.868 11:59:06 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:21:14.868 11:59:06 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:21:14.868 11:59:06 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:14.868 nvmf_trace.0 00:21:14.868 11:59:06 -- common/autotest_common.sh@811 -- # return 0 00:21:14.868 11:59:06 -- fips/fips.sh@16 -- # killprocess 1986354 00:21:14.868 11:59:06 -- common/autotest_common.sh@926 -- # '[' -z 1986354 ']' 00:21:14.868 11:59:06 -- common/autotest_common.sh@930 -- # kill -0 1986354 00:21:14.868 11:59:06 -- common/autotest_common.sh@931 -- # uname 00:21:14.868 11:59:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:14.868 11:59:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1986354 00:21:14.868 11:59:06 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:14.868 11:59:06 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:14.868 11:59:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1986354' 00:21:14.868 killing process with pid 1986354 00:21:14.868 11:59:06 -- common/autotest_common.sh@945 -- # kill 1986354 00:21:14.868 Received shutdown signal, test time was about 10.000000 seconds 00:21:14.868 00:21:14.868 Latency(us) 00:21:14.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.868 =================================================================================================================== 00:21:14.868 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:14.868 11:59:06 -- common/autotest_common.sh@950 -- # wait 1986354 00:21:14.868 11:59:06 -- fips/fips.sh@17 -- # nvmftestfini 00:21:14.868 11:59:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:14.868 11:59:06 -- nvmf/common.sh@116 -- # sync 00:21:14.868 11:59:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:14.868 11:59:06 -- nvmf/common.sh@119 -- # set +e 00:21:14.868 11:59:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:14.868 11:59:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:14.868 rmmod nvme_tcp 00:21:14.868 rmmod nvme_fabrics 00:21:14.868 rmmod nvme_keyring 00:21:14.868 11:59:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:14.868 11:59:06 -- nvmf/common.sh@123 -- # set -e 00:21:14.868 11:59:06 -- nvmf/common.sh@124 -- # return 0 
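Editor's note: both teardown passes in this log (bdevperf pid 1986354 here, the nvmf target pid 1985999 just below) go through the same killprocess guard - confirm the pid is still alive, read its command name so a sudo wrapper is never signalled, then kill and reap it. A minimal sketch of that pattern; the real helper lives in autotest_common.sh, and the final wait assumes the pid belongs to the current shell's job table:

    #!/usr/bin/env bash
    # Illustrative version of the killprocess guard traced above.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0          # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name == sudo ]] && { echo "refusing to kill sudo ($pid)"; return 1; }
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                  # reap if it is our child
    }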
00:21:14.868 11:59:06 -- nvmf/common.sh@477 -- # '[' -n 1985999 ']' 00:21:14.868 11:59:06 -- nvmf/common.sh@478 -- # killprocess 1985999 00:21:14.868 11:59:06 -- common/autotest_common.sh@926 -- # '[' -z 1985999 ']' 00:21:14.868 11:59:06 -- common/autotest_common.sh@930 -- # kill -0 1985999 00:21:14.868 11:59:06 -- common/autotest_common.sh@931 -- # uname 00:21:14.868 11:59:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:14.868 11:59:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1985999 00:21:14.868 11:59:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:14.868 11:59:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:14.868 11:59:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1985999' 00:21:14.868 killing process with pid 1985999 00:21:14.868 11:59:06 -- common/autotest_common.sh@945 -- # kill 1985999 00:21:14.868 11:59:06 -- common/autotest_common.sh@950 -- # wait 1985999 00:21:14.868 11:59:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:14.868 11:59:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:14.868 11:59:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:14.868 11:59:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:14.868 11:59:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:14.868 11:59:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.868 11:59:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:14.868 11:59:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.441 11:59:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:15.441 11:59:08 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:15.441 00:21:15.441 real 0m22.503s 00:21:15.441 user 0m22.677s 00:21:15.441 sys 0m10.411s 00:21:15.441 11:59:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:15.441 11:59:08 -- common/autotest_common.sh@10 -- # set +x 00:21:15.441 ************************************ 00:21:15.441 END TEST nvmf_fips 00:21:15.441 ************************************ 00:21:15.441 11:59:09 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:21:15.441 11:59:09 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:15.441 11:59:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:15.441 11:59:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:15.441 11:59:09 -- common/autotest_common.sh@10 -- # set +x 00:21:15.441 ************************************ 00:21:15.441 START TEST nvmf_fuzz 00:21:15.441 ************************************ 00:21:15.441 11:59:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:15.441 * Looking for test storage... 
00:21:15.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:15.441 11:59:09 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:15.441 11:59:09 -- nvmf/common.sh@7 -- # uname -s 00:21:15.441 11:59:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:15.441 11:59:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:15.441 11:59:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:15.441 11:59:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:15.441 11:59:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:15.441 11:59:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:15.441 11:59:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:15.441 11:59:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:15.441 11:59:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:15.441 11:59:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:15.441 11:59:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:15.441 11:59:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:15.441 11:59:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:15.441 11:59:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:15.441 11:59:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:15.441 11:59:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:15.441 11:59:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:15.441 11:59:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.441 11:59:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:15.441 11:59:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.441 11:59:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.441 11:59:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.441 11:59:09 -- paths/export.sh@5 -- # export PATH 00:21:15.441 11:59:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.441 11:59:09 -- nvmf/common.sh@46 -- # : 0 00:21:15.441 11:59:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:15.441 11:59:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:15.441 11:59:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:15.441 11:59:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:15.441 11:59:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:15.441 11:59:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:15.441 11:59:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:15.441 11:59:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:15.441 11:59:09 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:15.441 11:59:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:15.441 11:59:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:15.441 11:59:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:15.441 11:59:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:15.441 11:59:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:15.441 11:59:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.441 11:59:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:15.441 11:59:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.441 11:59:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:15.441 11:59:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:15.441 11:59:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:15.441 11:59:09 -- common/autotest_common.sh@10 -- # set +x 00:21:23.591 11:59:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:23.591 11:59:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:23.591 11:59:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:23.591 11:59:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:23.591 11:59:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:23.591 11:59:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:23.591 11:59:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:23.591 11:59:16 -- nvmf/common.sh@294 -- # net_devs=() 00:21:23.591 11:59:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:23.591 11:59:16 -- nvmf/common.sh@295 -- # e810=() 00:21:23.591 11:59:16 -- nvmf/common.sh@295 -- # local -ga e810 00:21:23.591 11:59:16 -- nvmf/common.sh@296 -- # x722=() 
00:21:23.591 11:59:16 -- nvmf/common.sh@296 -- # local -ga x722 00:21:23.591 11:59:16 -- nvmf/common.sh@297 -- # mlx=() 00:21:23.591 11:59:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:23.591 11:59:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:23.591 11:59:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:23.591 11:59:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:23.591 11:59:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:23.591 11:59:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:23.591 11:59:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:23.591 11:59:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:23.591 11:59:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:23.591 11:59:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:23.591 11:59:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:23.591 11:59:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:23.591 11:59:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:23.591 11:59:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:23.591 11:59:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:23.591 11:59:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:23.591 11:59:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:23.591 11:59:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:23.591 11:59:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:23.591 11:59:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:23.591 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:23.591 11:59:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:23.591 11:59:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:23.591 11:59:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.591 11:59:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.591 11:59:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:23.591 11:59:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:23.591 11:59:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:23.591 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:23.591 11:59:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:23.591 11:59:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:23.591 11:59:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.591 11:59:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.591 11:59:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:23.591 11:59:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:23.591 11:59:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:23.591 11:59:16 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:23.591 11:59:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:23.591 11:59:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.591 11:59:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:23.591 11:59:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.591 11:59:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:23.591 Found net devices under 0000:31:00.0: cvl_0_0 00:21:23.591 11:59:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
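Editor's note: the gather_supported_nvmf_pci_devs trace above and below filters PCI functions by vendor/device ID and then resolves each one to its kernel net devices through sysfs. The same idea as a stand-alone sketch, restricted to the E810 IDs that actually matched in this run (the full script also knows X722 and Mellanox parts):

    #!/usr/bin/env bash
    # Find net devices backed by whitelisted NIC PCI functions via sysfs.
    set -euo pipefail

    declare -a net_devs=()
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor"); device=$(<"$dev/device")
        # Intel E810 (0x1592 / 0x159b) - extend the whitelist as needed.
        [[ $vendor == 0x8086 && ( $device == 0x1592 || $device == 0x159b ) ]] || continue
        for netdir in "$dev"/net/*; do
            [[ -e $netdir ]] || continue
            echo "Found net devices under ${dev##*/}: ${netdir##*/}"
            net_devs+=("${netdir##*/}")
        done
    done
    printf 'usable interfaces: %s\n' "${net_devs[*]}"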
00:21:23.591 11:59:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:23.592 11:59:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.592 11:59:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:23.592 11:59:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.592 11:59:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:23.592 Found net devices under 0000:31:00.1: cvl_0_1 00:21:23.592 11:59:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.592 11:59:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:23.592 11:59:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:23.592 11:59:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:23.592 11:59:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:23.592 11:59:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:23.592 11:59:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:23.592 11:59:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:23.592 11:59:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:23.592 11:59:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:23.592 11:59:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:23.592 11:59:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:23.592 11:59:16 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:23.592 11:59:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:23.592 11:59:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:23.592 11:59:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:23.592 11:59:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:23.592 11:59:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:23.592 11:59:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:23.592 11:59:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:23.592 11:59:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:23.592 11:59:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:23.592 11:59:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:23.592 11:59:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:23.592 11:59:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:23.592 11:59:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:23.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:23.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:21:23.592 00:21:23.592 --- 10.0.0.2 ping statistics --- 00:21:23.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.592 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:21:23.592 11:59:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:23.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:23.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:21:23.592 00:21:23.592 --- 10.0.0.1 ping statistics --- 00:21:23.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.592 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:21:23.592 11:59:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:23.592 11:59:16 -- nvmf/common.sh@410 -- # return 0 00:21:23.592 11:59:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:23.592 11:59:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.592 11:59:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:23.592 11:59:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:23.592 11:59:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.592 11:59:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:23.592 11:59:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:23.592 11:59:16 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1992799 00:21:23.592 11:59:16 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:23.592 11:59:16 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:23.592 11:59:16 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1992799 00:21:23.592 11:59:16 -- common/autotest_common.sh@819 -- # '[' -z 1992799 ']' 00:21:23.592 11:59:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.592 11:59:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:23.592 11:59:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
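Editor's note: nvmfappstart / the fabrics_fuzz setup above launches nvmf_tgt inside the namespace and then blocks until its RPC socket answers; the "Waiting for process to start up..." message and the "(( i == 0 ))" / "return 0" pair in the trace are the visible edges of that retry loop. A hedged stand-alone sketch of the same wait - the rpc.py location and the rpc_get_methods probe are assumptions, the real loop is waitforlisten in autotest_common.sh:

    #!/usr/bin/env bash
    # Poll the SPDK RPC socket until the freshly started target answers.
    set -euo pipefail

    RPC_PY=./scripts/rpc.py
    SOCK=/var/tmp/spdk.sock
    pid=${1:?usage: $0 <nvmf_tgt pid>}

    echo "Waiting for process to start up and listen on UNIX domain socket $SOCK..."
    for ((i = 100; i > 0; i--)); do
        kill -0 "$pid" 2>/dev/null || { echo "process $pid died"; exit 1; }
        # rpc_get_methods only succeeds once the app is servicing RPCs.
        if "$RPC_PY" -s "$SOCK" rpc_get_methods &>/dev/null; then
            exit 0
        fi
        sleep 0.5
    done
    echo "timed out waiting for $pid"; exit 1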
00:21:23.592 11:59:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:23.592 11:59:16 -- common/autotest_common.sh@10 -- # set +x 00:21:23.592 11:59:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:23.592 11:59:17 -- common/autotest_common.sh@852 -- # return 0 00:21:23.592 11:59:17 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:23.592 11:59:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:23.592 11:59:17 -- common/autotest_common.sh@10 -- # set +x 00:21:23.592 11:59:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:23.592 11:59:17 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:21:23.592 11:59:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:23.592 11:59:17 -- common/autotest_common.sh@10 -- # set +x 00:21:23.592 Malloc0 00:21:23.592 11:59:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:23.592 11:59:17 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:23.592 11:59:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:23.592 11:59:17 -- common/autotest_common.sh@10 -- # set +x 00:21:23.592 11:59:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:23.592 11:59:17 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:23.592 11:59:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:23.592 11:59:17 -- common/autotest_common.sh@10 -- # set +x 00:21:23.592 11:59:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:23.592 11:59:17 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:23.592 11:59:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:23.592 11:59:17 -- common/autotest_common.sh@10 -- # set +x 00:21:23.592 11:59:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:23.592 11:59:17 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:21:23.592 11:59:17 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:21:55.729 Fuzzing completed. Shutting down the fuzz application 00:21:55.729 00:21:55.729 Dumping successful admin opcodes: 00:21:55.729 8, 9, 10, 24, 00:21:55.729 Dumping successful io opcodes: 00:21:55.729 0, 9, 00:21:55.729 NS: 0x200003aeff00 I/O qp, Total commands completed: 948222, total successful commands: 5540, random_seed: 966137280 00:21:55.729 NS: 0x200003aeff00 admin qp, Total commands completed: 119810, total successful commands: 981, random_seed: 108580672 00:21:55.729 11:59:47 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:21:55.729 Fuzzing completed. 
Shutting down the fuzz application 00:21:55.729 00:21:55.729 Dumping successful admin opcodes: 00:21:55.729 24, 00:21:55.729 Dumping successful io opcodes: 00:21:55.729 00:21:55.729 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3010708517 00:21:55.729 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3010790827 00:21:55.729 11:59:48 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:55.729 11:59:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:55.729 11:59:48 -- common/autotest_common.sh@10 -- # set +x 00:21:55.729 11:59:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:55.729 11:59:48 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:55.729 11:59:48 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:21:55.729 11:59:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:55.729 11:59:48 -- nvmf/common.sh@116 -- # sync 00:21:55.729 11:59:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:55.729 11:59:48 -- nvmf/common.sh@119 -- # set +e 00:21:55.729 11:59:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:55.729 11:59:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:55.729 rmmod nvme_tcp 00:21:55.729 rmmod nvme_fabrics 00:21:55.729 rmmod nvme_keyring 00:21:55.729 11:59:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:55.729 11:59:48 -- nvmf/common.sh@123 -- # set -e 00:21:55.729 11:59:48 -- nvmf/common.sh@124 -- # return 0 00:21:55.729 11:59:48 -- nvmf/common.sh@477 -- # '[' -n 1992799 ']' 00:21:55.729 11:59:48 -- nvmf/common.sh@478 -- # killprocess 1992799 00:21:55.729 11:59:48 -- common/autotest_common.sh@926 -- # '[' -z 1992799 ']' 00:21:55.729 11:59:48 -- common/autotest_common.sh@930 -- # kill -0 1992799 00:21:55.729 11:59:48 -- common/autotest_common.sh@931 -- # uname 00:21:55.729 11:59:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:55.729 11:59:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1992799 00:21:55.729 11:59:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:55.729 11:59:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:55.729 11:59:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1992799' 00:21:55.729 killing process with pid 1992799 00:21:55.729 11:59:49 -- common/autotest_common.sh@945 -- # kill 1992799 00:21:55.729 11:59:49 -- common/autotest_common.sh@950 -- # wait 1992799 00:21:55.729 11:59:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:55.729 11:59:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:55.729 11:59:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:55.729 11:59:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:55.729 11:59:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:55.729 11:59:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.729 11:59:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:55.729 11:59:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.670 11:59:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:57.670 11:59:51 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:21:57.670 00:21:57.670 real 0m42.243s 00:21:57.670 user 0m55.876s 00:21:57.670 sys 
0m15.556s 00:21:57.670 11:59:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:57.670 11:59:51 -- common/autotest_common.sh@10 -- # set +x 00:21:57.670 ************************************ 00:21:57.670 END TEST nvmf_fuzz 00:21:57.670 ************************************ 00:21:57.670 11:59:51 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:57.670 11:59:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:57.670 11:59:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:57.670 11:59:51 -- common/autotest_common.sh@10 -- # set +x 00:21:57.670 ************************************ 00:21:57.670 START TEST nvmf_multiconnection 00:21:57.670 ************************************ 00:21:57.670 11:59:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:57.670 * Looking for test storage... 00:21:57.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:57.670 11:59:51 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:57.670 11:59:51 -- nvmf/common.sh@7 -- # uname -s 00:21:57.670 11:59:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.670 11:59:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.670 11:59:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.670 11:59:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.670 11:59:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.670 11:59:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.670 11:59:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.670 11:59:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.670 11:59:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.670 11:59:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.931 11:59:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:57.931 11:59:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:57.931 11:59:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.931 11:59:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.931 11:59:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:57.931 11:59:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:57.931 11:59:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.931 11:59:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.931 11:59:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.931 11:59:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.931 11:59:51 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.931 11:59:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.931 11:59:51 -- paths/export.sh@5 -- # export PATH 00:21:57.931 11:59:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.931 11:59:51 -- nvmf/common.sh@46 -- # : 0 00:21:57.931 11:59:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:57.931 11:59:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:57.931 11:59:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:57.931 11:59:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.931 11:59:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.931 11:59:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:57.931 11:59:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:57.931 11:59:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:57.931 11:59:51 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:57.931 11:59:51 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:57.931 11:59:51 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:21:57.931 11:59:51 -- target/multiconnection.sh@16 -- # nvmftestinit 00:21:57.931 11:59:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:57.931 11:59:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.931 11:59:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:57.931 11:59:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:57.931 11:59:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:57.931 11:59:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.931 11:59:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:57.931 11:59:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.931 11:59:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:57.931 11:59:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:57.931 11:59:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:57.931 11:59:51 -- common/autotest_common.sh@10 -- 
# set +x 00:22:06.073 11:59:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:06.073 11:59:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:06.073 11:59:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:06.073 11:59:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:06.073 11:59:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:06.073 11:59:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:06.073 11:59:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:06.073 11:59:58 -- nvmf/common.sh@294 -- # net_devs=() 00:22:06.073 11:59:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:06.073 11:59:58 -- nvmf/common.sh@295 -- # e810=() 00:22:06.073 11:59:58 -- nvmf/common.sh@295 -- # local -ga e810 00:22:06.073 11:59:58 -- nvmf/common.sh@296 -- # x722=() 00:22:06.073 11:59:58 -- nvmf/common.sh@296 -- # local -ga x722 00:22:06.073 11:59:58 -- nvmf/common.sh@297 -- # mlx=() 00:22:06.073 11:59:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:06.073 11:59:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:06.073 11:59:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:06.073 11:59:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:06.073 11:59:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:06.073 11:59:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:06.073 11:59:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:06.073 11:59:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:06.073 11:59:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:06.073 11:59:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:06.073 11:59:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:06.073 11:59:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:06.073 11:59:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:06.073 11:59:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:06.073 11:59:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:06.073 11:59:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:06.073 11:59:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:06.073 11:59:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:06.073 11:59:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:06.073 11:59:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:06.073 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:06.073 11:59:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:06.073 11:59:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:06.073 11:59:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.073 11:59:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.073 11:59:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:06.073 11:59:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:06.073 11:59:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:06.073 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:06.073 11:59:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:06.073 11:59:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:06.073 11:59:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.073 11:59:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.073 11:59:58 -- 
nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:06.073 11:59:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:06.073 11:59:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:06.073 11:59:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:06.073 11:59:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:06.073 11:59:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.073 11:59:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:06.073 11:59:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.073 11:59:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:06.073 Found net devices under 0000:31:00.0: cvl_0_0 00:22:06.073 11:59:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.073 11:59:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:06.073 11:59:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.073 11:59:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:06.073 11:59:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.073 11:59:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:06.073 Found net devices under 0000:31:00.1: cvl_0_1 00:22:06.073 11:59:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.073 11:59:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:06.073 11:59:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:06.073 11:59:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:06.073 11:59:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:06.073 11:59:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:06.073 11:59:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:06.073 11:59:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:06.073 11:59:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:06.073 11:59:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:06.073 11:59:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:06.073 11:59:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:06.073 11:59:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:06.073 11:59:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:06.073 11:59:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:06.073 11:59:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:06.073 11:59:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:06.073 11:59:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:06.073 11:59:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:06.073 11:59:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:06.073 11:59:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:06.073 11:59:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:06.073 11:59:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:06.073 11:59:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:06.073 11:59:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:06.073 11:59:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:06.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
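The nvmf_tcp_init sequence traced above splits the two E810 ports into a target side and an initiator side: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and given 10.0.0.2/24, cvl_0_1 stays in the default namespace with 10.0.0.1/24, TCP port 4420 is opened in iptables, and connectivity is then checked with ping in both directions. A minimal standalone sketch of the same plumbing follows; the interface names cvl_0_0/cvl_0_1 are specific to this host's E810 ports and are assumed to exist already.

# Sketch: recreate the phy TCP test topology set up by nvmf_tcp_init above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator-side address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator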
00:22:06.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:22:06.073 00:22:06.074 --- 10.0.0.2 ping statistics --- 00:22:06.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.074 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:22:06.074 11:59:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:06.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:06.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:22:06.074 00:22:06.074 --- 10.0.0.1 ping statistics --- 00:22:06.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.074 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:22:06.074 11:59:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:06.074 11:59:58 -- nvmf/common.sh@410 -- # return 0 00:22:06.074 11:59:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:06.074 11:59:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:06.074 11:59:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:06.074 11:59:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:06.074 11:59:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:06.074 11:59:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:06.074 11:59:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:06.074 11:59:58 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:22:06.074 11:59:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:06.074 11:59:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:06.074 11:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:06.074 11:59:58 -- nvmf/common.sh@469 -- # nvmfpid=2003446 00:22:06.074 11:59:58 -- nvmf/common.sh@470 -- # waitforlisten 2003446 00:22:06.074 11:59:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:06.074 11:59:58 -- common/autotest_common.sh@819 -- # '[' -z 2003446 ']' 00:22:06.074 11:59:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.074 11:59:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:06.074 11:59:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.074 11:59:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:06.074 11:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:06.074 [2024-06-10 11:59:58.825931] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:06.074 [2024-06-10 11:59:58.826017] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.074 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.074 [2024-06-10 11:59:58.903944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:06.074 [2024-06-10 11:59:58.978361] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:06.074 [2024-06-10 11:59:58.978501] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
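nvmfappstart then loads the kernel NVMe/TCP initiator module on the host side and launches the SPDK target inside the target namespace with a four-core mask (-m 0xF) and tracepoint group mask 0xFFFF, waiting for its JSON-RPC socket before any RPCs are issued. A rough out-of-harness equivalent is sketched below; the nvmf_tgt path is the one from this workspace, /var/tmp/spdk.sock is SPDK's default RPC socket location, and rpc_get_methods is used only as a cheap readiness probe in place of the harness's waitforlisten helper.

# Sketch: start the NVMe-oF target in the target namespace, as nvmfappstart does above.
modprobe nvme-tcp            # kernel initiator, used later by 'nvme connect'
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
NVMFPID=$!
# The RPC socket is a UNIX-domain socket, so the client does not need to enter the netns.
until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
done
echo "nvmf_tgt ready with pid $NVMFPID"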
00:22:06.074 [2024-06-10 11:59:58.978511] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.074 [2024-06-10 11:59:58.978519] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:06.074 [2024-06-10 11:59:58.978670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.074 [2024-06-10 11:59:58.978786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:06.074 [2024-06-10 11:59:58.978944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.074 [2024-06-10 11:59:58.978945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:06.074 11:59:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:06.074 11:59:59 -- common/autotest_common.sh@852 -- # return 0 00:22:06.074 11:59:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:06.074 11:59:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:06.074 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.074 11:59:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.074 11:59:59 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:06.074 11:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.074 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.074 [2024-06-10 11:59:59.645415] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.074 11:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.074 11:59:59 -- target/multiconnection.sh@21 -- # seq 1 11 00:22:06.074 11:59:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.074 11:59:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:06.074 11:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.074 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.074 Malloc1 00:22:06.074 11:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.074 11:59:59 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:22:06.074 11:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.074 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.074 11:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.074 11:59:59 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:06.074 11:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.074 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.074 11:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.074 11:59:59 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:06.074 11:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.074 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.074 [2024-06-10 11:59:59.710335] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.074 11:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.074 11:59:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.074 11:59:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:22:06.074 11:59:59 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.074 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.074 Malloc2 00:22:06.074 11:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.074 11:59:59 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:22:06.074 11:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.074 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.074 11:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.074 11:59:59 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:22:06.074 11:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.074 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.074 11:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.074 11:59:59 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:06.074 11:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.074 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.074 11:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.074 11:59:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.074 11:59:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:22:06.074 11:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.074 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.074 Malloc3 00:22:06.074 11:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.074 11:59:59 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:22:06.074 11:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.074 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.074 11:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.074 11:59:59 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:22:06.074 11:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.074 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.074 11:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.074 11:59:59 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:22:06.074 11:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.074 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.074 11:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.074 11:59:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.074 11:59:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:22:06.074 11:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.074 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.074 Malloc4 00:22:06.074 11:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.074 11:59:59 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:22:06.074 11:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.074 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.335 11:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.335 11:59:59 -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:22:06.335 11:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.335 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.335 11:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.335 11:59:59 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:22:06.335 11:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.335 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.335 11:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.335 11:59:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.335 11:59:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:22:06.335 11:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.335 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.335 Malloc5 00:22:06.335 11:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.335 11:59:59 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:22:06.335 11:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.335 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.335 11:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.335 11:59:59 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:22:06.335 11:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.335 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.335 11:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.335 11:59:59 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:22:06.335 11:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.335 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.335 11:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.335 11:59:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.335 11:59:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:22:06.335 11:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.335 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.335 Malloc6 00:22:06.335 11:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.335 11:59:59 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:22:06.335 11:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.335 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.335 11:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.335 11:59:59 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:22:06.335 11:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.335 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.335 11:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.335 11:59:59 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:22:06.335 11:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.335 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.335 11:59:59 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.335 11:59:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.335 11:59:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:22:06.335 11:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.335 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.335 Malloc7 00:22:06.335 11:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.335 11:59:59 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:22:06.335 11:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.335 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.335 12:00:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.335 12:00:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:22:06.335 12:00:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.335 12:00:00 -- common/autotest_common.sh@10 -- # set +x 00:22:06.335 12:00:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.335 12:00:00 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:22:06.335 12:00:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.335 12:00:00 -- common/autotest_common.sh@10 -- # set +x 00:22:06.335 12:00:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.335 12:00:00 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.335 12:00:00 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:22:06.335 12:00:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.335 12:00:00 -- common/autotest_common.sh@10 -- # set +x 00:22:06.335 Malloc8 00:22:06.335 12:00:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.335 12:00:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:22:06.335 12:00:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.335 12:00:00 -- common/autotest_common.sh@10 -- # set +x 00:22:06.335 12:00:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.335 12:00:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:22:06.335 12:00:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.335 12:00:00 -- common/autotest_common.sh@10 -- # set +x 00:22:06.335 12:00:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.335 12:00:00 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:22:06.335 12:00:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.335 12:00:00 -- common/autotest_common.sh@10 -- # set +x 00:22:06.335 12:00:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.335 12:00:00 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.335 12:00:00 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:22:06.335 12:00:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.335 12:00:00 -- common/autotest_common.sh@10 -- # set +x 00:22:06.597 Malloc9 00:22:06.597 12:00:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.597 12:00:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 
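Each pass of the loop traced here provisions one test subsystem: a 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE), a subsystem nqn.2016-06.io.spdk:cnodeN with serial SPDKN that allows any host (-a), the bdev attached as its namespace, and a TCP listener on 10.0.0.2:4420. Condensed into plain rpc.py calls, the eleven iterations (NVMF_SUBSYS=11) look roughly like the sketch below; rpc_cmd in the trace is the harness wrapper around the same JSON-RPC client.

# Sketch: the TCP transport plus the 11 multiconnection subsystems created above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 11); do
        $RPC bdev_malloc_create 64 512 -b Malloc$i
        $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done

The host side then runs 'nvme connect ... -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420' for each subsystem and polls lsblk until a device with the matching SPDK$i serial shows up, which is what the waitforserial traces that follow are doing before fio is pointed at the resulting /dev/nvme*n1 devices.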
00:22:06.597 12:00:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.597 12:00:00 -- common/autotest_common.sh@10 -- # set +x 00:22:06.597 12:00:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.597 12:00:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:22:06.597 12:00:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.597 12:00:00 -- common/autotest_common.sh@10 -- # set +x 00:22:06.597 12:00:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.597 12:00:00 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:22:06.597 12:00:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.597 12:00:00 -- common/autotest_common.sh@10 -- # set +x 00:22:06.597 12:00:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.597 12:00:00 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.597 12:00:00 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:22:06.597 12:00:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.597 12:00:00 -- common/autotest_common.sh@10 -- # set +x 00:22:06.597 Malloc10 00:22:06.597 12:00:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.597 12:00:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:22:06.597 12:00:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.597 12:00:00 -- common/autotest_common.sh@10 -- # set +x 00:22:06.597 12:00:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.597 12:00:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:22:06.597 12:00:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.597 12:00:00 -- common/autotest_common.sh@10 -- # set +x 00:22:06.597 12:00:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.597 12:00:00 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:22:06.597 12:00:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.597 12:00:00 -- common/autotest_common.sh@10 -- # set +x 00:22:06.597 12:00:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.597 12:00:00 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.597 12:00:00 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:22:06.597 12:00:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.597 12:00:00 -- common/autotest_common.sh@10 -- # set +x 00:22:06.597 Malloc11 00:22:06.597 12:00:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.597 12:00:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:22:06.597 12:00:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.597 12:00:00 -- common/autotest_common.sh@10 -- # set +x 00:22:06.597 12:00:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.597 12:00:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:22:06.597 12:00:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.597 12:00:00 -- common/autotest_common.sh@10 -- # set +x 00:22:06.597 12:00:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.597 12:00:00 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:22:06.597 12:00:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.597 12:00:00 -- common/autotest_common.sh@10 -- # set +x 00:22:06.597 12:00:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.597 12:00:00 -- target/multiconnection.sh@28 -- # seq 1 11 00:22:06.597 12:00:00 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.597 12:00:00 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:07.978 12:00:01 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:22:07.978 12:00:01 -- common/autotest_common.sh@1177 -- # local i=0 00:22:07.978 12:00:01 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:07.978 12:00:01 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:07.978 12:00:01 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:10.536 12:00:03 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:10.536 12:00:03 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:10.536 12:00:03 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:22:10.536 12:00:03 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:10.536 12:00:03 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:10.536 12:00:03 -- common/autotest_common.sh@1187 -- # return 0 00:22:10.536 12:00:03 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:10.536 12:00:03 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:22:11.921 12:00:05 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:22:11.921 12:00:05 -- common/autotest_common.sh@1177 -- # local i=0 00:22:11.921 12:00:05 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:11.921 12:00:05 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:11.921 12:00:05 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:13.834 12:00:07 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:13.834 12:00:07 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:13.834 12:00:07 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:22:13.834 12:00:07 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:13.834 12:00:07 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:13.834 12:00:07 -- common/autotest_common.sh@1187 -- # return 0 00:22:13.834 12:00:07 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:13.834 12:00:07 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:22:15.220 12:00:08 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:22:15.220 12:00:08 -- common/autotest_common.sh@1177 -- # local i=0 00:22:15.220 12:00:08 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:15.220 12:00:08 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:15.220 12:00:08 -- 
common/autotest_common.sh@1184 -- # sleep 2 00:22:17.140 12:00:10 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:17.140 12:00:10 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:17.140 12:00:10 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:22:17.140 12:00:10 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:17.140 12:00:10 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:17.140 12:00:10 -- common/autotest_common.sh@1187 -- # return 0 00:22:17.140 12:00:10 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:17.140 12:00:10 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:22:19.057 12:00:12 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:22:19.057 12:00:12 -- common/autotest_common.sh@1177 -- # local i=0 00:22:19.057 12:00:12 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:19.057 12:00:12 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:19.057 12:00:12 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:20.971 12:00:14 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:20.971 12:00:14 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:20.971 12:00:14 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:22:20.971 12:00:14 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:20.971 12:00:14 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:20.971 12:00:14 -- common/autotest_common.sh@1187 -- # return 0 00:22:20.972 12:00:14 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:20.972 12:00:14 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:22:22.357 12:00:16 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:22:22.357 12:00:16 -- common/autotest_common.sh@1177 -- # local i=0 00:22:22.357 12:00:16 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:22.357 12:00:16 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:22.357 12:00:16 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:24.898 12:00:18 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:24.898 12:00:18 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:24.898 12:00:18 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:22:24.898 12:00:18 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:24.898 12:00:18 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:24.898 12:00:18 -- common/autotest_common.sh@1187 -- # return 0 00:22:24.898 12:00:18 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:24.898 12:00:18 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:22:26.290 12:00:19 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:26.290 12:00:19 -- common/autotest_common.sh@1177 -- # local i=0 00:22:26.290 12:00:19 -- common/autotest_common.sh@1178 -- # local 
nvme_device_counter=1 nvme_devices=0 00:22:26.290 12:00:19 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:26.290 12:00:19 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:28.210 12:00:21 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:28.210 12:00:21 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:28.210 12:00:21 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:22:28.210 12:00:21 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:28.210 12:00:21 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:28.210 12:00:21 -- common/autotest_common.sh@1187 -- # return 0 00:22:28.210 12:00:21 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:28.210 12:00:21 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:22:30.124 12:00:23 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:30.124 12:00:23 -- common/autotest_common.sh@1177 -- # local i=0 00:22:30.124 12:00:23 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:30.124 12:00:23 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:30.124 12:00:23 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:32.035 12:00:25 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:32.035 12:00:25 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:32.035 12:00:25 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:22:32.035 12:00:25 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:32.035 12:00:25 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:32.035 12:00:25 -- common/autotest_common.sh@1187 -- # return 0 00:22:32.035 12:00:25 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:32.035 12:00:25 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:22:33.946 12:00:27 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:33.946 12:00:27 -- common/autotest_common.sh@1177 -- # local i=0 00:22:33.946 12:00:27 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:33.946 12:00:27 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:33.946 12:00:27 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:35.855 12:00:29 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:35.855 12:00:29 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:35.855 12:00:29 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:22:35.855 12:00:29 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:35.855 12:00:29 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:35.855 12:00:29 -- common/autotest_common.sh@1187 -- # return 0 00:22:35.855 12:00:29 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:35.855 12:00:29 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:22:37.318 12:00:30 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:37.318 
12:00:30 -- common/autotest_common.sh@1177 -- # local i=0 00:22:37.319 12:00:30 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:37.319 12:00:30 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:37.319 12:00:30 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:39.241 12:00:32 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:39.241 12:00:32 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:39.241 12:00:32 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:22:39.241 12:00:32 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:39.241 12:00:32 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:39.241 12:00:32 -- common/autotest_common.sh@1187 -- # return 0 00:22:39.241 12:00:32 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:39.241 12:00:32 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:22:41.145 12:00:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:41.146 12:00:34 -- common/autotest_common.sh@1177 -- # local i=0 00:22:41.146 12:00:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:41.146 12:00:34 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:41.146 12:00:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:43.061 12:00:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:43.061 12:00:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:43.061 12:00:36 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:22:43.061 12:00:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:43.061 12:00:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:43.061 12:00:36 -- common/autotest_common.sh@1187 -- # return 0 00:22:43.061 12:00:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:43.061 12:00:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:22:44.973 12:00:38 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:44.973 12:00:38 -- common/autotest_common.sh@1177 -- # local i=0 00:22:44.973 12:00:38 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:44.973 12:00:38 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:44.973 12:00:38 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:46.886 12:00:40 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:46.886 12:00:40 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:46.886 12:00:40 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:22:46.886 12:00:40 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:46.886 12:00:40 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:46.886 12:00:40 -- common/autotest_common.sh@1187 -- # return 0 00:22:46.886 12:00:40 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:46.886 [global] 00:22:46.886 thread=1 00:22:46.886 invalidate=1 00:22:46.886 rw=read 00:22:46.886 time_based=1 00:22:46.886 
runtime=10 00:22:46.886 ioengine=libaio 00:22:46.886 direct=1 00:22:46.886 bs=262144 00:22:46.886 iodepth=64 00:22:46.886 norandommap=1 00:22:46.886 numjobs=1 00:22:46.886 00:22:46.886 [job0] 00:22:46.886 filename=/dev/nvme0n1 00:22:46.886 [job1] 00:22:46.886 filename=/dev/nvme10n1 00:22:46.886 [job2] 00:22:46.886 filename=/dev/nvme1n1 00:22:46.886 [job3] 00:22:46.886 filename=/dev/nvme2n1 00:22:46.886 [job4] 00:22:46.886 filename=/dev/nvme3n1 00:22:46.886 [job5] 00:22:46.886 filename=/dev/nvme4n1 00:22:46.886 [job6] 00:22:46.886 filename=/dev/nvme5n1 00:22:46.886 [job7] 00:22:46.886 filename=/dev/nvme6n1 00:22:46.886 [job8] 00:22:46.886 filename=/dev/nvme7n1 00:22:46.886 [job9] 00:22:46.886 filename=/dev/nvme8n1 00:22:46.886 [job10] 00:22:46.886 filename=/dev/nvme9n1 00:22:47.148 Could not set queue depth (nvme0n1) 00:22:47.148 Could not set queue depth (nvme10n1) 00:22:47.148 Could not set queue depth (nvme1n1) 00:22:47.148 Could not set queue depth (nvme2n1) 00:22:47.148 Could not set queue depth (nvme3n1) 00:22:47.148 Could not set queue depth (nvme4n1) 00:22:47.148 Could not set queue depth (nvme5n1) 00:22:47.148 Could not set queue depth (nvme6n1) 00:22:47.148 Could not set queue depth (nvme7n1) 00:22:47.148 Could not set queue depth (nvme8n1) 00:22:47.148 Could not set queue depth (nvme9n1) 00:22:47.408 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:47.408 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:47.408 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:47.408 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:47.408 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:47.408 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:47.408 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:47.408 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:47.408 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:47.408 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:47.408 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:47.408 fio-3.35 00:22:47.408 Starting 11 threads 00:22:59.638 00:22:59.638 job0: (groupid=0, jobs=1): err= 0: pid=2012701: Mon Jun 10 12:00:51 2024 00:22:59.638 read: IOPS=903, BW=226MiB/s (237MB/s)(2269MiB/10052msec) 00:22:59.638 slat (usec): min=7, max=99986, avg=1037.65, stdev=3445.04 00:22:59.638 clat (msec): min=6, max=221, avg=69.77, stdev=34.39 00:22:59.638 lat (msec): min=6, max=226, avg=70.80, stdev=34.95 00:22:59.638 clat percentiles (msec): 00:22:59.638 | 1.00th=[ 16], 5.00th=[ 31], 10.00th=[ 36], 20.00th=[ 42], 00:22:59.638 | 30.00th=[ 45], 40.00th=[ 50], 50.00th=[ 59], 60.00th=[ 73], 00:22:59.638 | 70.00th=[ 84], 80.00th=[ 105], 90.00th=[ 123], 95.00th=[ 138], 00:22:59.638 | 99.00th=[ 155], 99.50th=[ 169], 99.90th=[ 178], 99.95th=[ 178], 00:22:59.638 | 99.99th=[ 222] 00:22:59.638 bw ( KiB/s): min=122880, max=376832, per=9.45%, avg=230758.40, 
stdev=87140.93, samples=20 00:22:59.638 iops : min= 480, max= 1472, avg=901.40, stdev=340.39, samples=20 00:22:59.638 lat (msec) : 10=0.13%, 20=1.48%, 50=40.05%, 100=35.60%, 250=22.75% 00:22:59.638 cpu : usr=0.40%, sys=3.07%, ctx=2125, majf=0, minf=3534 00:22:59.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:59.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.638 issued rwts: total=9077,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.638 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.638 job1: (groupid=0, jobs=1): err= 0: pid=2012702: Mon Jun 10 12:00:51 2024 00:22:59.638 read: IOPS=849, BW=212MiB/s (223MB/s)(2135MiB/10052msec) 00:22:59.638 slat (usec): min=6, max=45095, avg=1017.23, stdev=2776.55 00:22:59.638 clat (msec): min=2, max=182, avg=74.27, stdev=24.67 00:22:59.638 lat (msec): min=2, max=197, avg=75.29, stdev=24.86 00:22:59.638 clat percentiles (msec): 00:22:59.638 | 1.00th=[ 8], 5.00th=[ 34], 10.00th=[ 48], 20.00th=[ 58], 00:22:59.638 | 30.00th=[ 65], 40.00th=[ 69], 50.00th=[ 74], 60.00th=[ 79], 00:22:59.638 | 70.00th=[ 85], 80.00th=[ 92], 90.00th=[ 102], 95.00th=[ 112], 00:22:59.638 | 99.00th=[ 157], 99.50th=[ 167], 99.90th=[ 180], 99.95th=[ 182], 00:22:59.638 | 99.99th=[ 182] 00:22:59.638 bw ( KiB/s): min=158208, max=340992, per=8.89%, avg=216980.35, stdev=50282.00, samples=20 00:22:59.638 iops : min= 618, max= 1332, avg=847.55, stdev=196.42, samples=20 00:22:59.638 lat (msec) : 4=0.11%, 10=1.79%, 20=1.00%, 50=9.04%, 100=76.93% 00:22:59.638 lat (msec) : 250=11.14% 00:22:59.638 cpu : usr=0.38%, sys=2.92%, ctx=1947, majf=0, minf=4097 00:22:59.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:59.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.638 issued rwts: total=8538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.638 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.638 job2: (groupid=0, jobs=1): err= 0: pid=2012704: Mon Jun 10 12:00:51 2024 00:22:59.638 read: IOPS=965, BW=241MiB/s (253MB/s)(2430MiB/10067msec) 00:22:59.638 slat (usec): min=5, max=133429, avg=826.61, stdev=4074.20 00:22:59.638 clat (msec): min=2, max=273, avg=65.39, stdev=40.34 00:22:59.638 lat (msec): min=2, max=273, avg=66.21, stdev=40.98 00:22:59.638 clat percentiles (msec): 00:22:59.638 | 1.00th=[ 9], 5.00th=[ 21], 10.00th=[ 26], 20.00th=[ 30], 00:22:59.638 | 30.00th=[ 37], 40.00th=[ 44], 50.00th=[ 54], 60.00th=[ 67], 00:22:59.638 | 70.00th=[ 79], 80.00th=[ 101], 90.00th=[ 134], 95.00th=[ 146], 00:22:59.638 | 99.00th=[ 161], 99.50th=[ 182], 99.90th=[ 192], 99.95th=[ 203], 00:22:59.638 | 99.99th=[ 275] 00:22:59.638 bw ( KiB/s): min=97280, max=550912, per=10.13%, avg=247207.25, stdev=116447.69, samples=20 00:22:59.638 iops : min= 380, max= 2152, avg=965.65, stdev=454.88, samples=20 00:22:59.638 lat (msec) : 4=0.15%, 10=1.48%, 20=3.20%, 50=43.05%, 100=32.05% 00:22:59.638 lat (msec) : 250=20.05%, 500=0.02% 00:22:59.638 cpu : usr=0.36%, sys=2.68%, ctx=2430, majf=0, minf=4097 00:22:59.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:59.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.638 issued rwts: total=9720,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:22:59.638 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.638 job3: (groupid=0, jobs=1): err= 0: pid=2012705: Mon Jun 10 12:00:51 2024 00:22:59.638 read: IOPS=807, BW=202MiB/s (212MB/s)(2026MiB/10038msec) 00:22:59.638 slat (usec): min=6, max=87738, avg=957.15, stdev=4277.37 00:22:59.638 clat (msec): min=2, max=236, avg=78.22, stdev=41.52 00:22:59.638 lat (msec): min=2, max=236, avg=79.18, stdev=42.13 00:22:59.638 clat percentiles (msec): 00:22:59.638 | 1.00th=[ 6], 5.00th=[ 15], 10.00th=[ 22], 20.00th=[ 37], 00:22:59.638 | 30.00th=[ 51], 40.00th=[ 67], 50.00th=[ 78], 60.00th=[ 91], 00:22:59.638 | 70.00th=[ 105], 80.00th=[ 116], 90.00th=[ 138], 95.00th=[ 146], 00:22:59.638 | 99.00th=[ 165], 99.50th=[ 176], 99.90th=[ 201], 99.95th=[ 222], 00:22:59.638 | 99.99th=[ 236] 00:22:59.638 bw ( KiB/s): min=97280, max=325120, per=8.43%, avg=205849.60, stdev=62756.97, samples=20 00:22:59.638 iops : min= 380, max= 1270, avg=804.10, stdev=245.14, samples=20 00:22:59.638 lat (msec) : 4=0.26%, 10=2.53%, 20=5.32%, 50=21.75%, 100=37.23% 00:22:59.638 lat (msec) : 250=32.91% 00:22:59.638 cpu : usr=0.39%, sys=2.56%, ctx=2219, majf=0, minf=4097 00:22:59.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:59.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.638 issued rwts: total=8104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.638 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.639 job4: (groupid=0, jobs=1): err= 0: pid=2012706: Mon Jun 10 12:00:51 2024 00:22:59.639 read: IOPS=1062, BW=266MiB/s (279MB/s)(2674MiB/10065msec) 00:22:59.639 slat (usec): min=5, max=117418, avg=790.86, stdev=3295.82 00:22:59.639 clat (msec): min=2, max=255, avg=59.38, stdev=35.38 00:22:59.639 lat (msec): min=2, max=255, avg=60.17, stdev=35.89 00:22:59.639 clat percentiles (msec): 00:22:59.639 | 1.00th=[ 7], 5.00th=[ 15], 10.00th=[ 21], 20.00th=[ 29], 00:22:59.639 | 30.00th=[ 32], 40.00th=[ 43], 50.00th=[ 58], 60.00th=[ 68], 00:22:59.639 | 70.00th=[ 77], 80.00th=[ 83], 90.00th=[ 101], 95.00th=[ 136], 00:22:59.639 | 99.00th=[ 167], 99.50th=[ 171], 99.90th=[ 180], 99.95th=[ 180], 00:22:59.639 | 99.99th=[ 228] 00:22:59.639 bw ( KiB/s): min=116736, max=475648, per=11.15%, avg=272244.90, stdev=109420.62, samples=20 00:22:59.639 iops : min= 456, max= 1858, avg=1063.45, stdev=427.43, samples=20 00:22:59.639 lat (msec) : 4=0.30%, 10=2.38%, 20=6.32%, 50=35.67%, 100=45.42% 00:22:59.639 lat (msec) : 250=9.89%, 500=0.01% 00:22:59.639 cpu : usr=0.47%, sys=2.89%, ctx=2644, majf=0, minf=4097 00:22:59.639 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:22:59.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.639 issued rwts: total=10697,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.639 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.639 job5: (groupid=0, jobs=1): err= 0: pid=2012707: Mon Jun 10 12:00:51 2024 00:22:59.639 read: IOPS=939, BW=235MiB/s (246MB/s)(2361MiB/10048msec) 00:22:59.639 slat (usec): min=5, max=122242, avg=945.31, stdev=3848.08 00:22:59.639 clat (usec): min=1894, max=241290, avg=67084.80, stdev=34445.18 00:22:59.639 lat (usec): min=1947, max=268397, avg=68030.11, stdev=35017.99 00:22:59.639 clat percentiles (msec): 00:22:59.639 | 1.00th=[ 10], 
5.00th=[ 29], 10.00th=[ 32], 20.00th=[ 39], 00:22:59.639 | 30.00th=[ 44], 40.00th=[ 52], 50.00th=[ 61], 60.00th=[ 68], 00:22:59.639 | 70.00th=[ 78], 80.00th=[ 93], 90.00th=[ 125], 95.00th=[ 136], 00:22:59.639 | 99.00th=[ 165], 99.50th=[ 169], 99.90th=[ 194], 99.95th=[ 220], 00:22:59.639 | 99.99th=[ 243] 00:22:59.639 bw ( KiB/s): min=122880, max=460288, per=9.84%, avg=240152.60, stdev=88760.47, samples=20 00:22:59.639 iops : min= 480, max= 1798, avg=938.05, stdev=346.72, samples=20 00:22:59.639 lat (msec) : 2=0.01%, 4=0.43%, 10=0.73%, 20=1.48%, 50=35.59% 00:22:59.639 lat (msec) : 100=43.96%, 250=17.79% 00:22:59.639 cpu : usr=0.42%, sys=2.92%, ctx=2202, majf=0, minf=4097 00:22:59.639 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:59.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.639 issued rwts: total=9443,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.639 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.639 job6: (groupid=0, jobs=1): err= 0: pid=2012708: Mon Jun 10 12:00:51 2024 00:22:59.639 read: IOPS=928, BW=232MiB/s (243MB/s)(2337MiB/10072msec) 00:22:59.639 slat (usec): min=5, max=104099, avg=903.98, stdev=2971.14 00:22:59.639 clat (msec): min=4, max=241, avg=67.97, stdev=26.23 00:22:59.639 lat (msec): min=4, max=241, avg=68.88, stdev=26.53 00:22:59.639 clat percentiles (msec): 00:22:59.639 | 1.00th=[ 21], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 48], 00:22:59.639 | 30.00th=[ 54], 40.00th=[ 59], 50.00th=[ 64], 60.00th=[ 70], 00:22:59.639 | 70.00th=[ 79], 80.00th=[ 89], 90.00th=[ 100], 95.00th=[ 111], 00:22:59.639 | 99.00th=[ 140], 99.50th=[ 180], 99.90th=[ 239], 99.95th=[ 239], 00:22:59.639 | 99.99th=[ 243] 00:22:59.639 bw ( KiB/s): min=170496, max=352256, per=9.74%, avg=237696.00, stdev=54956.34, samples=20 00:22:59.639 iops : min= 666, max= 1376, avg=928.50, stdev=214.67, samples=20 00:22:59.639 lat (msec) : 10=0.20%, 20=0.77%, 50=22.88%, 100=66.73%, 250=9.41% 00:22:59.639 cpu : usr=0.25%, sys=2.79%, ctx=2143, majf=0, minf=4097 00:22:59.639 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:59.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.639 issued rwts: total=9348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.639 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.639 job7: (groupid=0, jobs=1): err= 0: pid=2012709: Mon Jun 10 12:00:51 2024 00:22:59.639 read: IOPS=720, BW=180MiB/s (189MB/s)(1811MiB/10057msec) 00:22:59.639 slat (usec): min=5, max=117991, avg=1157.94, stdev=4978.07 00:22:59.639 clat (msec): min=3, max=250, avg=87.61, stdev=41.60 00:22:59.639 lat (msec): min=3, max=252, avg=88.77, stdev=42.36 00:22:59.639 clat percentiles (msec): 00:22:59.639 | 1.00th=[ 8], 5.00th=[ 16], 10.00th=[ 25], 20.00th=[ 45], 00:22:59.639 | 30.00th=[ 68], 40.00th=[ 82], 50.00th=[ 93], 60.00th=[ 104], 00:22:59.639 | 70.00th=[ 114], 80.00th=[ 126], 90.00th=[ 140], 95.00th=[ 148], 00:22:59.639 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 186], 99.95th=[ 218], 00:22:59.639 | 99.99th=[ 251] 00:22:59.639 bw ( KiB/s): min=125952, max=277504, per=7.53%, avg=183833.35, stdev=49375.53, samples=20 00:22:59.639 iops : min= 492, max= 1084, avg=718.05, stdev=192.80, samples=20 00:22:59.639 lat (msec) : 4=0.10%, 10=1.53%, 20=5.22%, 50=15.52%, 100=34.09% 00:22:59.639 lat (msec) 
: 250=43.53%, 500=0.01% 00:22:59.639 cpu : usr=0.31%, sys=2.24%, ctx=1977, majf=0, minf=4097 00:22:59.639 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:22:59.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.639 issued rwts: total=7243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.639 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.639 job8: (groupid=0, jobs=1): err= 0: pid=2012710: Mon Jun 10 12:00:51 2024 00:22:59.639 read: IOPS=804, BW=201MiB/s (211MB/s)(2015MiB/10019msec) 00:22:59.639 slat (usec): min=5, max=106018, avg=1062.18, stdev=3872.75 00:22:59.639 clat (usec): min=1357, max=218689, avg=78414.69, stdev=47110.23 00:22:59.639 lat (usec): min=1409, max=243939, avg=79476.87, stdev=47817.60 00:22:59.639 clat percentiles (msec): 00:22:59.639 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 22], 20.00th=[ 30], 00:22:59.639 | 30.00th=[ 34], 40.00th=[ 55], 50.00th=[ 80], 60.00th=[ 97], 00:22:59.639 | 70.00th=[ 111], 80.00th=[ 127], 90.00th=[ 144], 95.00th=[ 153], 00:22:59.639 | 99.00th=[ 174], 99.50th=[ 182], 99.90th=[ 186], 99.95th=[ 192], 00:22:59.639 | 99.99th=[ 220] 00:22:59.639 bw ( KiB/s): min=104448, max=507392, per=8.39%, avg=204723.20, stdev=103162.64, samples=20 00:22:59.639 iops : min= 408, max= 1982, avg=799.70, stdev=402.98, samples=20 00:22:59.639 lat (msec) : 2=0.05%, 4=0.91%, 10=2.43%, 20=5.51%, 50=28.95% 00:22:59.639 lat (msec) : 100=24.45%, 250=37.70% 00:22:59.639 cpu : usr=0.34%, sys=2.45%, ctx=2079, majf=0, minf=4097 00:22:59.639 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:59.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.639 issued rwts: total=8060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.639 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.639 job9: (groupid=0, jobs=1): err= 0: pid=2012711: Mon Jun 10 12:00:51 2024 00:22:59.639 read: IOPS=674, BW=169MiB/s (177MB/s)(1699MiB/10068msec) 00:22:59.639 slat (usec): min=5, max=116985, avg=1279.24, stdev=4735.40 00:22:59.639 clat (msec): min=4, max=220, avg=93.44, stdev=43.22 00:22:59.639 lat (msec): min=4, max=278, avg=94.72, stdev=43.97 00:22:59.639 clat percentiles (msec): 00:22:59.639 | 1.00th=[ 10], 5.00th=[ 14], 10.00th=[ 27], 20.00th=[ 48], 00:22:59.639 | 30.00th=[ 75], 40.00th=[ 91], 50.00th=[ 103], 60.00th=[ 112], 00:22:59.639 | 70.00th=[ 122], 80.00th=[ 132], 90.00th=[ 144], 95.00th=[ 153], 00:22:59.639 | 99.00th=[ 174], 99.50th=[ 176], 99.90th=[ 188], 99.95th=[ 203], 00:22:59.639 | 99.99th=[ 222] 00:22:59.639 bw ( KiB/s): min=103424, max=364544, per=7.06%, avg=172339.20, stdev=62517.52, samples=20 00:22:59.639 iops : min= 404, max= 1424, avg=673.20, stdev=244.21, samples=20 00:22:59.639 lat (msec) : 10=1.97%, 20=5.46%, 50=13.79%, 100=25.93%, 250=52.85% 00:22:59.639 cpu : usr=0.31%, sys=2.24%, ctx=1733, majf=0, minf=4097 00:22:59.639 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:59.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.639 issued rwts: total=6795,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.639 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.639 job10: (groupid=0, jobs=1): err= 0: pid=2012712: 
Mon Jun 10 12:00:51 2024 00:22:59.639 read: IOPS=897, BW=224MiB/s (235MB/s)(2256MiB/10052msec) 00:22:59.639 slat (usec): min=6, max=32088, avg=1071.44, stdev=2728.64 00:22:59.639 clat (msec): min=11, max=135, avg=70.16, stdev=17.23 00:22:59.639 lat (msec): min=11, max=136, avg=71.23, stdev=17.43 00:22:59.639 clat percentiles (msec): 00:22:59.639 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 56], 00:22:59.639 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 73], 00:22:59.639 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 94], 95.00th=[ 100], 00:22:59.639 | 99.00th=[ 116], 99.50th=[ 122], 99.90th=[ 132], 99.95th=[ 133], 00:22:59.639 | 99.99th=[ 136] 00:22:59.639 bw ( KiB/s): min=151552, max=299008, per=9.40%, avg=229376.00, stdev=42217.99, samples=20 00:22:59.639 iops : min= 592, max= 1168, avg=896.00, stdev=164.91, samples=20 00:22:59.639 lat (msec) : 20=0.18%, 50=11.88%, 100=83.21%, 250=4.73% 00:22:59.639 cpu : usr=0.32%, sys=3.14%, ctx=1917, majf=0, minf=4097 00:22:59.639 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:59.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.639 issued rwts: total=9023,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.639 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.639 00:22:59.639 Run status group 0 (all jobs): 00:22:59.639 READ: bw=2384MiB/s (2500MB/s), 169MiB/s-266MiB/s (177MB/s-279MB/s), io=23.4GiB (25.2GB), run=10019-10072msec 00:22:59.639 00:22:59.639 Disk stats (read/write): 00:22:59.639 nvme0n1: ios=17801/0, merge=0/0, ticks=1217564/0, in_queue=1217564, util=96.49% 00:22:59.639 nvme10n1: ios=16696/0, merge=0/0, ticks=1222486/0, in_queue=1222486, util=96.71% 00:22:59.639 nvme1n1: ios=19079/0, merge=0/0, ticks=1223304/0, in_queue=1223304, util=97.10% 00:22:59.639 nvme2n1: ios=15762/0, merge=0/0, ticks=1222082/0, in_queue=1222082, util=97.32% 00:22:59.640 nvme3n1: ios=21066/0, merge=0/0, ticks=1223237/0, in_queue=1223237, util=97.42% 00:22:59.640 nvme4n1: ios=18454/0, merge=0/0, ticks=1220065/0, in_queue=1220065, util=97.88% 00:22:59.640 nvme5n1: ios=18355/0, merge=0/0, ticks=1222494/0, in_queue=1222494, util=98.10% 00:22:59.640 nvme6n1: ios=14137/0, merge=0/0, ticks=1221168/0, in_queue=1221168, util=98.27% 00:22:59.640 nvme7n1: ios=15460/0, merge=0/0, ticks=1222964/0, in_queue=1222964, util=98.83% 00:22:59.640 nvme8n1: ios=13258/0, merge=0/0, ticks=1218412/0, in_queue=1218412, util=99.06% 00:22:59.640 nvme9n1: ios=17628/0, merge=0/0, ticks=1218045/0, in_queue=1218045, util=99.21% 00:22:59.640 12:00:51 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:22:59.640 [global] 00:22:59.640 thread=1 00:22:59.640 invalidate=1 00:22:59.640 rw=randwrite 00:22:59.640 time_based=1 00:22:59.640 runtime=10 00:22:59.640 ioengine=libaio 00:22:59.640 direct=1 00:22:59.640 bs=262144 00:22:59.640 iodepth=64 00:22:59.640 norandommap=1 00:22:59.640 numjobs=1 00:22:59.640 00:22:59.640 [job0] 00:22:59.640 filename=/dev/nvme0n1 00:22:59.640 [job1] 00:22:59.640 filename=/dev/nvme10n1 00:22:59.640 [job2] 00:22:59.640 filename=/dev/nvme1n1 00:22:59.640 [job3] 00:22:59.640 filename=/dev/nvme2n1 00:22:59.640 [job4] 00:22:59.640 filename=/dev/nvme3n1 00:22:59.640 [job5] 00:22:59.640 filename=/dev/nvme4n1 00:22:59.640 [job6] 00:22:59.640 filename=/dev/nvme5n1 00:22:59.640 [job7] 00:22:59.640 
filename=/dev/nvme6n1 00:22:59.640 [job8] 00:22:59.640 filename=/dev/nvme7n1 00:22:59.640 [job9] 00:22:59.640 filename=/dev/nvme8n1 00:22:59.640 [job10] 00:22:59.640 filename=/dev/nvme9n1 00:22:59.640 Could not set queue depth (nvme0n1) 00:22:59.640 Could not set queue depth (nvme10n1) 00:22:59.640 Could not set queue depth (nvme1n1) 00:22:59.640 Could not set queue depth (nvme2n1) 00:22:59.640 Could not set queue depth (nvme3n1) 00:22:59.640 Could not set queue depth (nvme4n1) 00:22:59.640 Could not set queue depth (nvme5n1) 00:22:59.640 Could not set queue depth (nvme6n1) 00:22:59.640 Could not set queue depth (nvme7n1) 00:22:59.640 Could not set queue depth (nvme8n1) 00:22:59.640 Could not set queue depth (nvme9n1) 00:22:59.640 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:59.640 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:59.640 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:59.640 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:59.640 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:59.640 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:59.640 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:59.640 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:59.640 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:59.640 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:59.640 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:59.640 fio-3.35 00:22:59.640 Starting 11 threads 00:23:09.641 00:23:09.641 job0: (groupid=0, jobs=1): err= 0: pid=2014913: Mon Jun 10 12:01:02 2024 00:23:09.641 write: IOPS=792, BW=198MiB/s (208MB/s)(1995MiB/10063msec); 0 zone resets 00:23:09.641 slat (usec): min=27, max=50146, avg=1226.59, stdev=2283.34 00:23:09.641 clat (msec): min=3, max=178, avg=79.47, stdev=20.12 00:23:09.641 lat (msec): min=4, max=178, avg=80.69, stdev=20.35 00:23:09.641 clat percentiles (msec): 00:23:09.641 | 1.00th=[ 36], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 65], 00:23:09.641 | 30.00th=[ 68], 40.00th=[ 75], 50.00th=[ 79], 60.00th=[ 82], 00:23:09.641 | 70.00th=[ 84], 80.00th=[ 89], 90.00th=[ 102], 95.00th=[ 121], 00:23:09.641 | 99.00th=[ 148], 99.50th=[ 159], 99.90th=[ 169], 99.95th=[ 174], 00:23:09.641 | 99.99th=[ 178] 00:23:09.641 bw ( KiB/s): min=118272, max=271872, per=11.45%, avg=202633.65, stdev=36008.60, samples=20 00:23:09.641 iops : min= 462, max= 1062, avg=791.50, stdev=140.71, samples=20 00:23:09.641 lat (msec) : 4=0.01%, 10=0.04%, 20=0.21%, 50=2.01%, 100=85.84% 00:23:09.641 lat (msec) : 250=11.89% 00:23:09.641 cpu : usr=1.81%, sys=2.66%, ctx=2163, majf=0, minf=1 00:23:09.641 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:09.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.1%, >=64=0.0% 00:23:09.641 issued rwts: total=0,7979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.641 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:09.641 job1: (groupid=0, jobs=1): err= 0: pid=2014944: Mon Jun 10 12:01:02 2024 00:23:09.641 write: IOPS=665, BW=166MiB/s (174MB/s)(1681MiB/10111msec); 0 zone resets 00:23:09.641 slat (usec): min=23, max=37266, avg=1359.61, stdev=2713.05 00:23:09.641 clat (msec): min=3, max=235, avg=94.83, stdev=27.47 00:23:09.641 lat (msec): min=3, max=235, avg=96.19, stdev=27.80 00:23:09.641 clat percentiles (msec): 00:23:09.641 | 1.00th=[ 21], 5.00th=[ 46], 10.00th=[ 75], 20.00th=[ 82], 00:23:09.641 | 30.00th=[ 84], 40.00th=[ 87], 50.00th=[ 89], 60.00th=[ 95], 00:23:09.641 | 70.00th=[ 102], 80.00th=[ 117], 90.00th=[ 131], 95.00th=[ 142], 00:23:09.641 | 99.00th=[ 167], 99.50th=[ 182], 99.90th=[ 226], 99.95th=[ 232], 00:23:09.641 | 99.99th=[ 236] 00:23:09.641 bw ( KiB/s): min=114688, max=274432, per=9.64%, avg=170562.50, stdev=37152.07, samples=20 00:23:09.641 iops : min= 448, max= 1072, avg=666.25, stdev=145.13, samples=20 00:23:09.641 lat (msec) : 4=0.04%, 10=0.27%, 20=0.61%, 50=4.67%, 100=61.23% 00:23:09.641 lat (msec) : 250=33.17% 00:23:09.642 cpu : usr=1.47%, sys=2.39%, ctx=2332, majf=0, minf=1 00:23:09.642 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:23:09.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:09.642 issued rwts: total=0,6725,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.642 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:09.642 job2: (groupid=0, jobs=1): err= 0: pid=2014970: Mon Jun 10 12:01:02 2024 00:23:09.642 write: IOPS=566, BW=142MiB/s (148MB/s)(1432MiB/10115msec); 0 zone resets 00:23:09.642 slat (usec): min=25, max=34875, avg=1657.60, stdev=3277.13 00:23:09.642 clat (msec): min=3, max=229, avg=111.31, stdev=34.55 00:23:09.642 lat (msec): min=3, max=229, avg=112.96, stdev=35.03 00:23:09.642 clat percentiles (msec): 00:23:09.642 | 1.00th=[ 25], 5.00th=[ 43], 10.00th=[ 62], 20.00th=[ 83], 00:23:09.642 | 30.00th=[ 92], 40.00th=[ 114], 50.00th=[ 126], 60.00th=[ 128], 00:23:09.642 | 70.00th=[ 136], 80.00th=[ 140], 90.00th=[ 146], 95.00th=[ 150], 00:23:09.642 | 99.00th=[ 165], 99.50th=[ 178], 99.90th=[ 224], 99.95th=[ 224], 00:23:09.642 | 99.99th=[ 230] 00:23:09.642 bw ( KiB/s): min=108544, max=288256, per=8.19%, avg=145012.75, stdev=45722.60, samples=20 00:23:09.642 iops : min= 424, max= 1126, avg=566.45, stdev=178.60, samples=20 00:23:09.642 lat (msec) : 4=0.03%, 10=0.35%, 20=0.38%, 50=6.55%, 100=25.34% 00:23:09.642 lat (msec) : 250=67.35% 00:23:09.642 cpu : usr=1.32%, sys=2.03%, ctx=1919, majf=0, minf=1 00:23:09.642 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:23:09.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:09.642 issued rwts: total=0,5727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.642 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:09.642 job3: (groupid=0, jobs=1): err= 0: pid=2014990: Mon Jun 10 12:01:02 2024 00:23:09.642 write: IOPS=778, BW=195MiB/s (204MB/s)(1959MiB/10065msec); 0 zone resets 00:23:09.642 slat (usec): min=22, max=60591, avg=1183.57, stdev=2665.88 00:23:09.642 clat (msec): min=3, max=184, avg=80.99, stdev=29.13 00:23:09.642 lat (msec): min=4, max=188, avg=82.17, 
stdev=29.54 00:23:09.642 clat percentiles (msec): 00:23:09.642 | 1.00th=[ 12], 5.00th=[ 31], 10.00th=[ 58], 20.00th=[ 63], 00:23:09.642 | 30.00th=[ 66], 40.00th=[ 70], 50.00th=[ 77], 60.00th=[ 85], 00:23:09.642 | 70.00th=[ 89], 80.00th=[ 104], 90.00th=[ 126], 95.00th=[ 136], 00:23:09.642 | 99.00th=[ 155], 99.50th=[ 157], 99.90th=[ 180], 99.95th=[ 184], 00:23:09.642 | 99.99th=[ 186] 00:23:09.642 bw ( KiB/s): min=123392, max=285184, per=11.24%, avg=198976.05, stdev=47722.79, samples=20 00:23:09.642 iops : min= 482, max= 1114, avg=777.25, stdev=186.42, samples=20 00:23:09.642 lat (msec) : 4=0.01%, 10=0.79%, 20=1.71%, 50=5.72%, 100=69.59% 00:23:09.642 lat (msec) : 250=22.18% 00:23:09.642 cpu : usr=1.68%, sys=2.33%, ctx=2612, majf=0, minf=1 00:23:09.642 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:09.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:09.642 issued rwts: total=0,7835,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.642 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:09.642 job4: (groupid=0, jobs=1): err= 0: pid=2014998: Mon Jun 10 12:01:02 2024 00:23:09.642 write: IOPS=680, BW=170MiB/s (178MB/s)(1720MiB/10110msec); 0 zone resets 00:23:09.642 slat (usec): min=21, max=50955, avg=1383.22, stdev=3099.18 00:23:09.642 clat (msec): min=3, max=237, avg=92.65, stdev=46.94 00:23:09.642 lat (msec): min=3, max=237, avg=94.04, stdev=47.61 00:23:09.642 clat percentiles (msec): 00:23:09.642 | 1.00th=[ 12], 5.00th=[ 37], 10.00th=[ 43], 20.00th=[ 50], 00:23:09.642 | 30.00th=[ 55], 40.00th=[ 59], 50.00th=[ 66], 60.00th=[ 123], 00:23:09.642 | 70.00th=[ 134], 80.00th=[ 142], 90.00th=[ 155], 95.00th=[ 163], 00:23:09.642 | 99.00th=[ 176], 99.50th=[ 182], 99.90th=[ 228], 99.95th=[ 232], 00:23:09.642 | 99.99th=[ 239] 00:23:09.642 bw ( KiB/s): min=102400, max=310784, per=9.86%, avg=174464.00, stdev=80703.71, samples=20 00:23:09.642 iops : min= 400, max= 1214, avg=681.50, stdev=315.25, samples=20 00:23:09.642 lat (msec) : 4=0.04%, 10=0.70%, 20=1.29%, 50=18.46%, 100=33.27% 00:23:09.642 lat (msec) : 250=46.23% 00:23:09.642 cpu : usr=1.52%, sys=2.12%, ctx=2223, majf=0, minf=1 00:23:09.642 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:23:09.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:09.642 issued rwts: total=0,6878,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.642 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:09.642 job5: (groupid=0, jobs=1): err= 0: pid=2015028: Mon Jun 10 12:01:02 2024 00:23:09.642 write: IOPS=531, BW=133MiB/s (139MB/s)(1345MiB/10121msec); 0 zone resets 00:23:09.642 slat (usec): min=16, max=26932, avg=1802.64, stdev=3346.52 00:23:09.642 clat (msec): min=3, max=244, avg=118.53, stdev=30.58 00:23:09.642 lat (msec): min=4, max=244, avg=120.33, stdev=30.95 00:23:09.642 clat percentiles (msec): 00:23:09.642 | 1.00th=[ 20], 5.00th=[ 57], 10.00th=[ 81], 20.00th=[ 89], 00:23:09.642 | 30.00th=[ 109], 40.00th=[ 124], 50.00th=[ 128], 60.00th=[ 130], 00:23:09.642 | 70.00th=[ 133], 80.00th=[ 142], 90.00th=[ 150], 95.00th=[ 155], 00:23:09.642 | 99.00th=[ 165], 99.50th=[ 188], 99.90th=[ 236], 99.95th=[ 236], 00:23:09.642 | 99.99th=[ 245] 00:23:09.642 bw ( KiB/s): min=104448, max=227328, per=7.69%, avg=136140.80, stdev=32957.72, samples=20 00:23:09.642 iops : min= 408, max= 
888, avg=531.80, stdev=128.74, samples=20 00:23:09.642 lat (msec) : 4=0.02%, 10=0.24%, 20=0.78%, 50=2.66%, 100=19.57% 00:23:09.642 lat (msec) : 250=76.73% 00:23:09.642 cpu : usr=1.09%, sys=1.62%, ctx=1604, majf=0, minf=1 00:23:09.642 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:09.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:09.642 issued rwts: total=0,5381,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.642 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:09.642 job6: (groupid=0, jobs=1): err= 0: pid=2015040: Mon Jun 10 12:01:02 2024 00:23:09.642 write: IOPS=663, BW=166MiB/s (174MB/s)(1679MiB/10120msec); 0 zone resets 00:23:09.642 slat (usec): min=24, max=74816, avg=1484.12, stdev=2858.46 00:23:09.642 clat (msec): min=12, max=243, avg=94.84, stdev=24.02 00:23:09.642 lat (msec): min=12, max=243, avg=96.32, stdev=24.24 00:23:09.642 clat percentiles (msec): 00:23:09.642 | 1.00th=[ 65], 5.00th=[ 68], 10.00th=[ 71], 20.00th=[ 74], 00:23:09.642 | 30.00th=[ 79], 40.00th=[ 82], 50.00th=[ 87], 60.00th=[ 89], 00:23:09.642 | 70.00th=[ 108], 80.00th=[ 123], 90.00th=[ 131], 95.00th=[ 133], 00:23:09.642 | 99.00th=[ 146], 99.50th=[ 178], 99.90th=[ 236], 99.95th=[ 236], 00:23:09.642 | 99.99th=[ 245] 00:23:09.642 bw ( KiB/s): min=119022, max=230912, per=9.62%, avg=170303.10, stdev=38857.62, samples=20 00:23:09.642 iops : min= 464, max= 902, avg=665.20, stdev=151.85, samples=20 00:23:09.642 lat (msec) : 20=0.03%, 100=64.13%, 250=35.85% 00:23:09.642 cpu : usr=1.75%, sys=1.97%, ctx=1717, majf=0, minf=1 00:23:09.642 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:23:09.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:09.642 issued rwts: total=0,6715,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.642 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:09.642 job7: (groupid=0, jobs=1): err= 0: pid=2015051: Mon Jun 10 12:01:02 2024 00:23:09.642 write: IOPS=681, BW=170MiB/s (179MB/s)(1724MiB/10118msec); 0 zone resets 00:23:09.642 slat (usec): min=22, max=79910, avg=1343.58, stdev=2988.52 00:23:09.642 clat (msec): min=3, max=248, avg=92.48, stdev=32.22 00:23:09.642 lat (msec): min=3, max=248, avg=93.83, stdev=32.55 00:23:09.642 clat percentiles (msec): 00:23:09.642 | 1.00th=[ 13], 5.00th=[ 46], 10.00th=[ 63], 20.00th=[ 77], 00:23:09.642 | 30.00th=[ 79], 40.00th=[ 82], 50.00th=[ 84], 60.00th=[ 86], 00:23:09.642 | 70.00th=[ 102], 80.00th=[ 124], 90.00th=[ 133], 95.00th=[ 157], 00:23:09.642 | 99.00th=[ 176], 99.50th=[ 190], 99.90th=[ 232], 99.95th=[ 241], 00:23:09.642 | 99.99th=[ 249] 00:23:09.642 bw ( KiB/s): min=96256, max=235520, per=9.88%, avg=174965.20, stdev=40689.90, samples=20 00:23:09.642 iops : min= 376, max= 920, avg=683.45, stdev=158.95, samples=20 00:23:09.642 lat (msec) : 4=0.04%, 10=0.61%, 20=1.16%, 50=3.84%, 100=62.36% 00:23:09.642 lat (msec) : 250=31.98% 00:23:09.642 cpu : usr=1.61%, sys=1.90%, ctx=2207, majf=0, minf=1 00:23:09.642 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:23:09.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:09.642 issued rwts: total=0,6897,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.642 latency : target=0, 
window=0, percentile=100.00%, depth=64 00:23:09.642 job8: (groupid=0, jobs=1): err= 0: pid=2015086: Mon Jun 10 12:01:02 2024 00:23:09.642 write: IOPS=516, BW=129MiB/s (135MB/s)(1306MiB/10111msec); 0 zone resets 00:23:09.642 slat (usec): min=25, max=39861, avg=1833.41, stdev=3568.36 00:23:09.642 clat (msec): min=3, max=238, avg=121.99, stdev=32.87 00:23:09.642 lat (msec): min=4, max=238, avg=123.82, stdev=33.28 00:23:09.642 clat percentiles (msec): 00:23:09.642 | 1.00th=[ 18], 5.00th=[ 54], 10.00th=[ 93], 20.00th=[ 102], 00:23:09.642 | 30.00th=[ 111], 40.00th=[ 123], 50.00th=[ 128], 60.00th=[ 130], 00:23:09.642 | 70.00th=[ 138], 80.00th=[ 146], 90.00th=[ 159], 95.00th=[ 174], 00:23:09.642 | 99.00th=[ 186], 99.50th=[ 197], 99.90th=[ 224], 99.95th=[ 226], 00:23:09.642 | 99.99th=[ 239] 00:23:09.642 bw ( KiB/s): min=92160, max=182272, per=7.46%, avg=132111.65, stdev=24471.09, samples=20 00:23:09.642 iops : min= 360, max= 712, avg=516.05, stdev=95.58, samples=20 00:23:09.642 lat (msec) : 4=0.02%, 10=0.36%, 20=0.78%, 50=3.56%, 100=13.63% 00:23:09.642 lat (msec) : 250=81.64% 00:23:09.642 cpu : usr=1.22%, sys=1.44%, ctx=1672, majf=0, minf=1 00:23:09.642 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:09.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:09.643 issued rwts: total=0,5223,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.643 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:09.643 job9: (groupid=0, jobs=1): err= 0: pid=2015104: Mon Jun 10 12:01:02 2024 00:23:09.643 write: IOPS=505, BW=126MiB/s (133MB/s)(1280MiB/10119msec); 0 zone resets 00:23:09.643 slat (usec): min=21, max=75563, avg=1908.78, stdev=4226.95 00:23:09.643 clat (msec): min=6, max=246, avg=124.55, stdev=24.14 00:23:09.643 lat (msec): min=6, max=246, avg=126.46, stdev=24.31 00:23:09.643 clat percentiles (msec): 00:23:09.643 | 1.00th=[ 42], 5.00th=[ 78], 10.00th=[ 97], 20.00th=[ 108], 00:23:09.643 | 30.00th=[ 121], 40.00th=[ 127], 50.00th=[ 129], 60.00th=[ 132], 00:23:09.643 | 70.00th=[ 136], 80.00th=[ 142], 90.00th=[ 148], 95.00th=[ 155], 00:23:09.643 | 99.00th=[ 169], 99.50th=[ 199], 99.90th=[ 239], 99.95th=[ 239], 00:23:09.643 | 99.99th=[ 247] 00:23:09.643 bw ( KiB/s): min=110592, max=182784, per=7.31%, avg=129395.50, stdev=18897.16, samples=20 00:23:09.643 iops : min= 432, max= 714, avg=505.45, stdev=73.82, samples=20 00:23:09.643 lat (msec) : 10=0.08%, 20=0.29%, 50=1.17%, 100=10.34%, 250=88.12% 00:23:09.643 cpu : usr=1.04%, sys=1.56%, ctx=1417, majf=0, minf=1 00:23:09.643 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:09.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:09.643 issued rwts: total=0,5118,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.643 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:09.643 job10: (groupid=0, jobs=1): err= 0: pid=2015117: Mon Jun 10 12:01:02 2024 00:23:09.643 write: IOPS=543, BW=136MiB/s (143MB/s)(1375MiB/10114msec); 0 zone resets 00:23:09.643 slat (usec): min=22, max=53030, avg=1767.42, stdev=3436.80 00:23:09.643 clat (msec): min=2, max=248, avg=115.86, stdev=36.10 00:23:09.643 lat (msec): min=2, max=248, avg=117.62, stdev=36.53 00:23:09.643 clat percentiles (msec): 00:23:09.643 | 1.00th=[ 15], 5.00th=[ 42], 10.00th=[ 78], 20.00th=[ 87], 00:23:09.643 | 30.00th=[ 107], 
40.00th=[ 118], 50.00th=[ 125], 60.00th=[ 128], 00:23:09.643 | 70.00th=[ 133], 80.00th=[ 142], 90.00th=[ 153], 95.00th=[ 165], 00:23:09.643 | 99.00th=[ 194], 99.50th=[ 199], 99.90th=[ 241], 99.95th=[ 241], 00:23:09.643 | 99.99th=[ 249] 00:23:09.643 bw ( KiB/s): min=96256, max=221696, per=7.87%, avg=139228.15, stdev=34581.77, samples=20 00:23:09.643 iops : min= 376, max= 866, avg=543.85, stdev=135.08, samples=20 00:23:09.643 lat (msec) : 4=0.11%, 10=0.56%, 20=0.84%, 50=7.45%, 100=17.67% 00:23:09.643 lat (msec) : 250=73.37% 00:23:09.643 cpu : usr=1.27%, sys=1.52%, ctx=1652, majf=0, minf=1 00:23:09.643 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:23:09.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:09.643 issued rwts: total=0,5501,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.643 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:09.643 00:23:09.643 Run status group 0 (all jobs): 00:23:09.643 WRITE: bw=1729MiB/s (1813MB/s), 126MiB/s-198MiB/s (133MB/s-208MB/s), io=17.1GiB (18.3GB), run=10063-10121msec 00:23:09.643 00:23:09.643 Disk stats (read/write): 00:23:09.643 nvme0n1: ios=49/15952, merge=0/0, ticks=289/1225903, in_queue=1226192, util=96.97% 00:23:09.643 nvme10n1: ios=48/13392, merge=0/0, ticks=152/1226753, in_queue=1226905, util=97.31% 00:23:09.643 nvme1n1: ios=42/11387, merge=0/0, ticks=980/1222989, in_queue=1223969, util=99.93% 00:23:09.643 nvme2n1: ios=43/15660, merge=0/0, ticks=847/1230148, in_queue=1230995, util=99.94% 00:23:09.643 nvme3n1: ios=0/13702, merge=0/0, ticks=0/1223842, in_queue=1223842, util=97.16% 00:23:09.643 nvme4n1: ios=45/10693, merge=0/0, ticks=249/1223729, in_queue=1223978, util=99.55% 00:23:09.643 nvme5n1: ios=44/13363, merge=0/0, ticks=1098/1219503, in_queue=1220601, util=99.98% 00:23:09.643 nvme6n1: ios=47/13731, merge=0/0, ticks=2717/1216826, in_queue=1219543, util=99.98% 00:23:09.643 nvme7n1: ios=39/10388, merge=0/0, ticks=1039/1224401, in_queue=1225440, util=99.96% 00:23:09.643 nvme8n1: ios=44/10171, merge=0/0, ticks=2133/1208465, in_queue=1210598, util=99.95% 00:23:09.643 nvme9n1: ios=0/10939, merge=0/0, ticks=0/1223671, in_queue=1223671, util=99.11% 00:23:09.643 12:01:02 -- target/multiconnection.sh@36 -- # sync 00:23:09.643 12:01:02 -- target/multiconnection.sh@37 -- # seq 1 11 00:23:09.643 12:01:02 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:09.643 12:01:02 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:09.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:09.643 12:01:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:23:09.643 12:01:03 -- common/autotest_common.sh@1198 -- # local i=0 00:23:09.643 12:01:03 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:09.643 12:01:03 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:23:09.643 12:01:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:09.643 12:01:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:23:09.643 12:01:03 -- common/autotest_common.sh@1210 -- # return 0 00:23:09.643 12:01:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:09.643 12:01:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:09.643 12:01:03 -- common/autotest_common.sh@10 -- # set +x 00:23:09.643 12:01:03 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:23:09.643 12:01:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:09.643 12:01:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:23:09.904 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:23:09.904 12:01:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:23:09.904 12:01:03 -- common/autotest_common.sh@1198 -- # local i=0 00:23:09.904 12:01:03 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:09.904 12:01:03 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:23:09.904 12:01:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:09.904 12:01:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:23:09.904 12:01:03 -- common/autotest_common.sh@1210 -- # return 0 00:23:09.904 12:01:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:09.904 12:01:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:09.904 12:01:03 -- common/autotest_common.sh@10 -- # set +x 00:23:09.904 12:01:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:09.904 12:01:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:09.904 12:01:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:23:10.166 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:23:10.166 12:01:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:23:10.166 12:01:03 -- common/autotest_common.sh@1198 -- # local i=0 00:23:10.166 12:01:03 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:10.166 12:01:03 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:23:10.166 12:01:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:10.166 12:01:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:23:10.166 12:01:03 -- common/autotest_common.sh@1210 -- # return 0 00:23:10.166 12:01:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:23:10.166 12:01:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:10.166 12:01:03 -- common/autotest_common.sh@10 -- # set +x 00:23:10.166 12:01:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:10.166 12:01:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:10.166 12:01:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:23:10.428 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:23:10.428 12:01:04 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:23:10.428 12:01:04 -- common/autotest_common.sh@1198 -- # local i=0 00:23:10.428 12:01:04 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:10.428 12:01:04 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:23:10.428 12:01:04 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:10.428 12:01:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:23:10.428 12:01:04 -- common/autotest_common.sh@1210 -- # return 0 00:23:10.428 12:01:04 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:23:10.428 12:01:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:10.428 12:01:04 -- common/autotest_common.sh@10 -- # set +x 00:23:10.428 12:01:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:10.428 12:01:04 -- target/multiconnection.sh@37 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:23:10.428 12:01:04 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:23:10.689 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:23:10.689 12:01:04 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:23:10.689 12:01:04 -- common/autotest_common.sh@1198 -- # local i=0 00:23:10.689 12:01:04 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:10.689 12:01:04 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:23:10.689 12:01:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:23:10.689 12:01:04 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:10.689 12:01:04 -- common/autotest_common.sh@1210 -- # return 0 00:23:10.689 12:01:04 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:23:10.689 12:01:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:10.689 12:01:04 -- common/autotest_common.sh@10 -- # set +x 00:23:10.689 12:01:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:10.689 12:01:04 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:10.689 12:01:04 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:23:10.950 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:23:10.950 12:01:04 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:23:10.950 12:01:04 -- common/autotest_common.sh@1198 -- # local i=0 00:23:10.950 12:01:04 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:10.950 12:01:04 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:23:10.950 12:01:04 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:10.950 12:01:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:23:10.950 12:01:04 -- common/autotest_common.sh@1210 -- # return 0 00:23:10.950 12:01:04 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:23:10.950 12:01:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:10.950 12:01:04 -- common/autotest_common.sh@10 -- # set +x 00:23:10.950 12:01:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:10.950 12:01:04 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:10.950 12:01:04 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:23:11.212 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:23:11.212 12:01:04 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:23:11.212 12:01:04 -- common/autotest_common.sh@1198 -- # local i=0 00:23:11.212 12:01:04 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:11.212 12:01:04 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:23:11.212 12:01:04 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:11.212 12:01:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:23:11.212 12:01:04 -- common/autotest_common.sh@1210 -- # return 0 00:23:11.212 12:01:04 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:23:11.212 12:01:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:11.212 12:01:04 -- common/autotest_common.sh@10 -- # set +x 00:23:11.212 12:01:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:11.212 12:01:04 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:11.212 12:01:04 -- target/multiconnection.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode8 00:23:11.212 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:23:11.212 12:01:04 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:23:11.212 12:01:04 -- common/autotest_common.sh@1198 -- # local i=0 00:23:11.212 12:01:04 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:11.212 12:01:04 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:23:11.212 12:01:04 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:11.212 12:01:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:23:11.212 12:01:04 -- common/autotest_common.sh@1210 -- # return 0 00:23:11.212 12:01:04 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:23:11.212 12:01:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:11.212 12:01:04 -- common/autotest_common.sh@10 -- # set +x 00:23:11.212 12:01:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:11.212 12:01:04 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:11.212 12:01:04 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:23:11.475 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:23:11.475 12:01:05 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:23:11.475 12:01:05 -- common/autotest_common.sh@1198 -- # local i=0 00:23:11.475 12:01:05 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:11.475 12:01:05 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:23:11.475 12:01:05 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:11.475 12:01:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:23:11.475 12:01:05 -- common/autotest_common.sh@1210 -- # return 0 00:23:11.475 12:01:05 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:23:11.475 12:01:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:11.475 12:01:05 -- common/autotest_common.sh@10 -- # set +x 00:23:11.475 12:01:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:11.475 12:01:05 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:11.475 12:01:05 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:23:11.475 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:23:11.475 12:01:05 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:23:11.475 12:01:05 -- common/autotest_common.sh@1198 -- # local i=0 00:23:11.475 12:01:05 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:11.475 12:01:05 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:23:11.736 12:01:05 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:11.736 12:01:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:23:11.736 12:01:05 -- common/autotest_common.sh@1210 -- # return 0 00:23:11.736 12:01:05 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:23:11.736 12:01:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:11.736 12:01:05 -- common/autotest_common.sh@10 -- # set +x 00:23:11.736 12:01:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:11.736 12:01:05 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:11.736 12:01:05 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:23:11.736 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 
controller(s) 00:23:11.736 12:01:05 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:23:11.736 12:01:05 -- common/autotest_common.sh@1198 -- # local i=0 00:23:11.736 12:01:05 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:11.736 12:01:05 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:23:11.736 12:01:05 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:11.736 12:01:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:23:11.736 12:01:05 -- common/autotest_common.sh@1210 -- # return 0 00:23:11.736 12:01:05 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:23:11.736 12:01:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:11.736 12:01:05 -- common/autotest_common.sh@10 -- # set +x 00:23:11.736 12:01:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:11.736 12:01:05 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:23:11.736 12:01:05 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:11.736 12:01:05 -- target/multiconnection.sh@47 -- # nvmftestfini 00:23:11.736 12:01:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:11.736 12:01:05 -- nvmf/common.sh@116 -- # sync 00:23:11.736 12:01:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:11.736 12:01:05 -- nvmf/common.sh@119 -- # set +e 00:23:11.736 12:01:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:11.736 12:01:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:11.736 rmmod nvme_tcp 00:23:11.736 rmmod nvme_fabrics 00:23:11.736 rmmod nvme_keyring 00:23:11.736 12:01:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:11.736 12:01:05 -- nvmf/common.sh@123 -- # set -e 00:23:11.736 12:01:05 -- nvmf/common.sh@124 -- # return 0 00:23:11.736 12:01:05 -- nvmf/common.sh@477 -- # '[' -n 2003446 ']' 00:23:11.736 12:01:05 -- nvmf/common.sh@478 -- # killprocess 2003446 00:23:11.736 12:01:05 -- common/autotest_common.sh@926 -- # '[' -z 2003446 ']' 00:23:11.736 12:01:05 -- common/autotest_common.sh@930 -- # kill -0 2003446 00:23:11.736 12:01:05 -- common/autotest_common.sh@931 -- # uname 00:23:11.736 12:01:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:11.736 12:01:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2003446 00:23:11.997 12:01:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:11.997 12:01:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:11.997 12:01:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2003446' 00:23:11.997 killing process with pid 2003446 00:23:11.997 12:01:05 -- common/autotest_common.sh@945 -- # kill 2003446 00:23:11.997 12:01:05 -- common/autotest_common.sh@950 -- # wait 2003446 00:23:12.258 12:01:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:12.258 12:01:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:12.258 12:01:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:12.258 12:01:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:12.258 12:01:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:12.258 12:01:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.258 12:01:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:12.258 12:01:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.170 12:01:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:14.170 00:23:14.170 real 1m16.548s 00:23:14.170 
user 4m48.889s 00:23:14.170 sys 0m21.662s 00:23:14.170 12:01:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:14.170 12:01:07 -- common/autotest_common.sh@10 -- # set +x 00:23:14.170 ************************************ 00:23:14.170 END TEST nvmf_multiconnection 00:23:14.170 ************************************ 00:23:14.170 12:01:07 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:23:14.170 12:01:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:14.170 12:01:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:14.170 12:01:07 -- common/autotest_common.sh@10 -- # set +x 00:23:14.170 ************************************ 00:23:14.170 START TEST nvmf_initiator_timeout 00:23:14.170 ************************************ 00:23:14.170 12:01:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:23:14.430 * Looking for test storage... 00:23:14.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:14.430 12:01:08 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:14.430 12:01:08 -- nvmf/common.sh@7 -- # uname -s 00:23:14.430 12:01:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:14.430 12:01:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.430 12:01:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:14.430 12:01:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:14.430 12:01:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:14.430 12:01:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.430 12:01:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.430 12:01:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.430 12:01:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.430 12:01:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.430 12:01:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:14.430 12:01:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:14.430 12:01:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.430 12:01:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.430 12:01:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:14.430 12:01:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:14.430 12:01:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.430 12:01:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.430 12:01:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:14.430 12:01:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.430 12:01:08 -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.430 12:01:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.430 12:01:08 -- paths/export.sh@5 -- # export PATH 00:23:14.430 12:01:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.431 12:01:08 -- nvmf/common.sh@46 -- # : 0 00:23:14.431 12:01:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:14.431 12:01:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:14.431 12:01:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:14.431 12:01:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:14.431 12:01:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:14.431 12:01:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:14.431 12:01:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:14.431 12:01:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:14.431 12:01:08 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:14.431 12:01:08 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:14.431 12:01:08 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:23:14.431 12:01:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:14.431 12:01:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.431 12:01:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:14.431 12:01:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:14.431 12:01:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:14.431 12:01:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.431 12:01:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:14.431 12:01:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.431 12:01:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:14.431 12:01:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:14.431 12:01:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:14.431 12:01:08 -- common/autotest_common.sh@10 -- # set +x 00:23:22.570 12:01:15 -- nvmf/common.sh@288 -- # 
local intel=0x8086 mellanox=0x15b3 pci 00:23:22.570 12:01:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:22.570 12:01:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:22.570 12:01:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:22.570 12:01:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:22.570 12:01:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:22.570 12:01:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:22.570 12:01:15 -- nvmf/common.sh@294 -- # net_devs=() 00:23:22.570 12:01:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:22.570 12:01:15 -- nvmf/common.sh@295 -- # e810=() 00:23:22.570 12:01:15 -- nvmf/common.sh@295 -- # local -ga e810 00:23:22.570 12:01:15 -- nvmf/common.sh@296 -- # x722=() 00:23:22.570 12:01:15 -- nvmf/common.sh@296 -- # local -ga x722 00:23:22.570 12:01:15 -- nvmf/common.sh@297 -- # mlx=() 00:23:22.570 12:01:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:22.570 12:01:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:22.570 12:01:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:22.570 12:01:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:22.570 12:01:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:22.570 12:01:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:22.570 12:01:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:22.570 12:01:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:22.570 12:01:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:22.570 12:01:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:22.570 12:01:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:22.571 12:01:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:22.571 12:01:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:22.571 12:01:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:22.571 12:01:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:22.571 12:01:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:22.571 12:01:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:22.571 12:01:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:22.571 12:01:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:22.571 12:01:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:22.571 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:22.571 12:01:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:22.571 12:01:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:22.571 12:01:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.571 12:01:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.571 12:01:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:22.571 12:01:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:22.571 12:01:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:22.571 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:22.571 12:01:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:22.571 12:01:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:22.571 12:01:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.571 12:01:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.571 12:01:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:22.571 12:01:15 -- 
nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:22.571 12:01:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:22.571 12:01:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:22.571 12:01:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:22.571 12:01:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.571 12:01:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:22.571 12:01:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.571 12:01:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:22.571 Found net devices under 0000:31:00.0: cvl_0_0 00:23:22.571 12:01:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.571 12:01:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:22.571 12:01:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.571 12:01:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:22.571 12:01:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.571 12:01:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:22.571 Found net devices under 0000:31:00.1: cvl_0_1 00:23:22.571 12:01:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.571 12:01:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:22.571 12:01:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:22.571 12:01:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:22.571 12:01:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:22.571 12:01:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:22.571 12:01:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:22.571 12:01:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:22.571 12:01:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:22.571 12:01:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:22.571 12:01:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:22.571 12:01:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:22.571 12:01:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:22.571 12:01:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:22.571 12:01:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:22.571 12:01:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:22.571 12:01:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:22.571 12:01:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:22.571 12:01:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:22.571 12:01:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:22.571 12:01:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:22.571 12:01:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:22.571 12:01:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:22.571 12:01:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:22.571 12:01:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:22.571 12:01:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:22.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:22.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:23:22.571 00:23:22.571 --- 10.0.0.2 ping statistics --- 00:23:22.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.571 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:23:22.571 12:01:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:22.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:22.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:23:22.571 00:23:22.571 --- 10.0.0.1 ping statistics --- 00:23:22.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.571 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:23:22.571 12:01:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:22.571 12:01:15 -- nvmf/common.sh@410 -- # return 0 00:23:22.571 12:01:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:22.571 12:01:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:22.571 12:01:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:22.571 12:01:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:22.571 12:01:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:22.571 12:01:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:22.571 12:01:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:22.571 12:01:15 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:23:22.571 12:01:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:22.571 12:01:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:22.571 12:01:15 -- common/autotest_common.sh@10 -- # set +x 00:23:22.571 12:01:15 -- nvmf/common.sh@469 -- # nvmfpid=2021913 00:23:22.571 12:01:15 -- nvmf/common.sh@470 -- # waitforlisten 2021913 00:23:22.571 12:01:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:22.571 12:01:15 -- common/autotest_common.sh@819 -- # '[' -z 2021913 ']' 00:23:22.571 12:01:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.571 12:01:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:22.571 12:01:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.571 12:01:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:22.571 12:01:15 -- common/autotest_common.sh@10 -- # set +x 00:23:22.571 [2024-06-10 12:01:15.454702] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:22.571 [2024-06-10 12:01:15.454804] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.571 EAL: No free 2048 kB hugepages reported on node 1 00:23:22.571 [2024-06-10 12:01:15.527693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:22.571 [2024-06-10 12:01:15.600646] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:22.571 [2024-06-10 12:01:15.600780] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:22.571 [2024-06-10 12:01:15.600790] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:22.571 [2024-06-10 12:01:15.600798] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:22.571 [2024-06-10 12:01:15.600952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.571 [2024-06-10 12:01:15.601086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:22.571 [2024-06-10 12:01:15.601249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.571 [2024-06-10 12:01:15.601262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:22.571 12:01:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:22.571 12:01:16 -- common/autotest_common.sh@852 -- # return 0 00:23:22.571 12:01:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:22.571 12:01:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:22.571 12:01:16 -- common/autotest_common.sh@10 -- # set +x 00:23:22.571 12:01:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.571 12:01:16 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:22.571 12:01:16 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:22.571 12:01:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:22.571 12:01:16 -- common/autotest_common.sh@10 -- # set +x 00:23:22.571 Malloc0 00:23:22.571 12:01:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:22.571 12:01:16 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:23:22.571 12:01:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:22.571 12:01:16 -- common/autotest_common.sh@10 -- # set +x 00:23:22.571 Delay0 00:23:22.571 12:01:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:22.571 12:01:16 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:22.571 12:01:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:22.571 12:01:16 -- common/autotest_common.sh@10 -- # set +x 00:23:22.571 [2024-06-10 12:01:16.301530] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.571 12:01:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:22.571 12:01:16 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:22.571 12:01:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:22.571 12:01:16 -- common/autotest_common.sh@10 -- # set +x 00:23:22.571 12:01:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:22.571 12:01:16 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:22.571 12:01:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:22.571 12:01:16 -- common/autotest_common.sh@10 -- # set +x 00:23:22.571 12:01:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:22.571 12:01:16 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:22.571 12:01:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:22.572 12:01:16 -- common/autotest_common.sh@10 -- # set +x 00:23:22.832 [2024-06-10 12:01:16.341790] tcp.c: 
951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.832 12:01:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:22.832 12:01:16 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:24.216 12:01:17 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:23:24.216 12:01:17 -- common/autotest_common.sh@1177 -- # local i=0 00:23:24.216 12:01:17 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:24.216 12:01:17 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:24.217 12:01:17 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:26.130 12:01:19 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:26.130 12:01:19 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:26.130 12:01:19 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:23:26.130 12:01:19 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:26.130 12:01:19 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:26.130 12:01:19 -- common/autotest_common.sh@1187 -- # return 0 00:23:26.130 12:01:19 -- target/initiator_timeout.sh@35 -- # fio_pid=2022769 00:23:26.130 12:01:19 -- target/initiator_timeout.sh@37 -- # sleep 3 00:23:26.130 12:01:19 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:23:26.130 [global] 00:23:26.130 thread=1 00:23:26.130 invalidate=1 00:23:26.130 rw=write 00:23:26.130 time_based=1 00:23:26.130 runtime=60 00:23:26.130 ioengine=libaio 00:23:26.130 direct=1 00:23:26.130 bs=4096 00:23:26.130 iodepth=1 00:23:26.130 norandommap=0 00:23:26.130 numjobs=1 00:23:26.130 00:23:26.130 verify_dump=1 00:23:26.130 verify_backlog=512 00:23:26.130 verify_state_save=0 00:23:26.130 do_verify=1 00:23:26.130 verify=crc32c-intel 00:23:26.130 [job0] 00:23:26.130 filename=/dev/nvme0n1 00:23:26.130 Could not set queue depth (nvme0n1) 00:23:26.698 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:26.698 fio-3.35 00:23:26.698 Starting 1 thread 00:23:29.321 12:01:22 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:23:29.321 12:01:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:29.321 12:01:22 -- common/autotest_common.sh@10 -- # set +x 00:23:29.321 true 00:23:29.321 12:01:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:29.321 12:01:22 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:23:29.321 12:01:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:29.321 12:01:22 -- common/autotest_common.sh@10 -- # set +x 00:23:29.321 true 00:23:29.321 12:01:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:29.321 12:01:22 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:23:29.321 12:01:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:29.321 12:01:22 -- common/autotest_common.sh@10 -- # set +x 00:23:29.321 true 00:23:29.321 12:01:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:29.321 12:01:22 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 
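The target-side setup and the mid-run latency bump captured above reduce to a short RPC sequence. The following is a condensed sketch, assuming SPDK's scripts/rpc.py is on PATH and talks to the same /var/tmp/spdk.sock the target listens on (inside the harness the rpc_cmd wrapper issues these calls, and nvme connect additionally passes the --hostnqn/--hostid pair generated earlier). Latency values are in microseconds, so 31000000 is roughly 31 s, long enough to stall the fio verify job and exercise the initiator timeout path.

    # Backing stack, as in the trace: a malloc bdev wrapped by a delay bdev
    # with 30 us baseline latencies on every metric.
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30

    # Export Delay0 over NVMe/TCP on the namespaced target address.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: connect, then (while fio is writing) raise the injected
    # latency so in-flight I/O sits behind a multi-second delay.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    rpc.py bdev_delay_update_latency Delay0 avg_read 31000000
    rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
    rpc.py bdev_delay_update_latency Delay0 p99_read 31000000
    rpc.py bdev_delay_update_latency Delay0 p99_write 310000000   # value as it appears in the trace

Later in the trace the script drops all four metrics back to 30 before letting fio run to completion.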
00:23:29.321 12:01:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:29.321 12:01:22 -- common/autotest_common.sh@10 -- # set +x 00:23:29.321 true 00:23:29.321 12:01:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:29.321 12:01:22 -- target/initiator_timeout.sh@45 -- # sleep 3 00:23:32.621 12:01:25 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:23:32.621 12:01:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:32.621 12:01:25 -- common/autotest_common.sh@10 -- # set +x 00:23:32.621 true 00:23:32.621 12:01:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:32.621 12:01:25 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:23:32.621 12:01:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:32.621 12:01:25 -- common/autotest_common.sh@10 -- # set +x 00:23:32.621 true 00:23:32.621 12:01:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:32.621 12:01:25 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:23:32.621 12:01:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:32.621 12:01:25 -- common/autotest_common.sh@10 -- # set +x 00:23:32.621 true 00:23:32.621 12:01:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:32.621 12:01:25 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:23:32.621 12:01:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:32.621 12:01:25 -- common/autotest_common.sh@10 -- # set +x 00:23:32.621 true 00:23:32.621 12:01:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:32.621 12:01:25 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:23:32.621 12:01:25 -- target/initiator_timeout.sh@54 -- # wait 2022769 00:24:28.883 00:24:28.883 job0: (groupid=0, jobs=1): err= 0: pid=2023115: Mon Jun 10 12:02:20 2024 00:24:28.883 read: IOPS=162, BW=649KiB/s (664kB/s)(38.0MiB/60001msec) 00:24:28.883 slat (usec): min=6, max=213, avg=25.14, stdev= 3.73 00:24:28.883 clat (usec): min=466, max=41793k, avg=5535.99, stdev=423736.64 00:24:28.883 lat (usec): min=491, max=41793k, avg=5561.12, stdev=423736.65 00:24:28.883 clat percentiles (usec): 00:24:28.883 | 1.00th=[ 668], 5.00th=[ 750], 10.00th=[ 816], 00:24:28.883 | 20.00th=[ 873], 30.00th=[ 898], 40.00th=[ 930], 00:24:28.883 | 50.00th=[ 979], 60.00th=[ 1004], 70.00th=[ 1012], 00:24:28.883 | 80.00th=[ 1020], 90.00th=[ 1045], 95.00th=[ 1057], 00:24:28.883 | 99.00th=[ 1139], 99.50th=[ 42206], 99.90th=[ 42206], 00:24:28.883 | 99.95th=[ 42206], 99.99th=[17112761] 00:24:28.883 write: IOPS=168, BW=675KiB/s (691kB/s)(39.5MiB/60001msec); 0 zone resets 00:24:28.883 slat (usec): min=9, max=33036, avg=32.44, stdev=328.20 00:24:28.883 clat (usec): min=199, max=1124, avg=534.60, stdev=97.68 00:24:28.883 lat (usec): min=209, max=33808, avg=567.04, stdev=345.57 00:24:28.883 clat percentiles (usec): 00:24:28.883 | 1.00th=[ 318], 5.00th=[ 388], 10.00th=[ 416], 20.00th=[ 449], 00:24:28.883 | 30.00th=[ 498], 40.00th=[ 506], 50.00th=[ 519], 60.00th=[ 553], 00:24:28.883 | 70.00th=[ 594], 80.00th=[ 611], 90.00th=[ 652], 95.00th=[ 693], 00:24:28.883 | 99.00th=[ 799], 99.50th=[ 816], 99.90th=[ 865], 99.95th=[ 889], 00:24:28.883 | 99.99th=[ 898] 00:24:28.883 bw ( KiB/s): min= 280, max= 4096, per=100.00%, avg=2683.59, stdev=1340.81, samples=29 00:24:28.883 iops : min= 70, max= 1024, avg=670.90, stdev=335.20, samples=29 00:24:28.883 lat (usec) : 250=0.05%, 500=15.51%, 750=36.72%, 
1000=28.08% 00:24:28.883 lat (msec) : 2=19.29%, 50=0.35%, >=2000=0.01% 00:24:28.883 cpu : usr=0.52%, sys=0.93%, ctx=19856, majf=0, minf=1 00:24:28.883 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:28.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.883 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.883 issued rwts: total=9728,10123,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:28.883 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:28.883 00:24:28.883 Run status group 0 (all jobs): 00:24:28.883 READ: bw=649KiB/s (664kB/s), 649KiB/s-649KiB/s (664kB/s-664kB/s), io=38.0MiB (39.8MB), run=60001-60001msec 00:24:28.883 WRITE: bw=675KiB/s (691kB/s), 675KiB/s-675KiB/s (691kB/s-691kB/s), io=39.5MiB (41.5MB), run=60001-60001msec 00:24:28.883 00:24:28.883 Disk stats (read/write): 00:24:28.883 nvme0n1: ios=9780/9947, merge=0/0, ticks=13468/4860, in_queue=18328, util=99.72% 00:24:28.883 12:02:20 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:28.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:28.883 12:02:20 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:28.883 12:02:20 -- common/autotest_common.sh@1198 -- # local i=0 00:24:28.883 12:02:20 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:28.883 12:02:20 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:28.883 12:02:20 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:28.883 12:02:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:28.883 12:02:20 -- common/autotest_common.sh@1210 -- # return 0 00:24:28.883 12:02:20 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:24:28.883 12:02:20 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:24:28.883 nvmf hotplug test: fio successful as expected 00:24:28.883 12:02:20 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:28.883 12:02:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:28.883 12:02:20 -- common/autotest_common.sh@10 -- # set +x 00:24:28.883 12:02:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:28.883 12:02:20 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:24:28.883 12:02:20 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:24:28.883 12:02:20 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:24:28.883 12:02:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:28.883 12:02:20 -- nvmf/common.sh@116 -- # sync 00:24:28.883 12:02:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:28.883 12:02:20 -- nvmf/common.sh@119 -- # set +e 00:24:28.883 12:02:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:28.883 12:02:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:28.883 rmmod nvme_tcp 00:24:28.883 rmmod nvme_fabrics 00:24:28.883 rmmod nvme_keyring 00:24:28.883 12:02:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:28.883 12:02:20 -- nvmf/common.sh@123 -- # set -e 00:24:28.883 12:02:20 -- nvmf/common.sh@124 -- # return 0 00:24:28.883 12:02:20 -- nvmf/common.sh@477 -- # '[' -n 2021913 ']' 00:24:28.883 12:02:20 -- nvmf/common.sh@478 -- # killprocess 2021913 00:24:28.883 12:02:20 -- common/autotest_common.sh@926 -- # '[' -z 2021913 ']' 00:24:28.883 12:02:20 -- common/autotest_common.sh@930 -- # kill -0 
2021913 00:24:28.883 12:02:20 -- common/autotest_common.sh@931 -- # uname 00:24:28.883 12:02:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:28.883 12:02:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2021913 00:24:28.883 12:02:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:28.883 12:02:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:28.883 12:02:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2021913' 00:24:28.883 killing process with pid 2021913 00:24:28.883 12:02:20 -- common/autotest_common.sh@945 -- # kill 2021913 00:24:28.883 12:02:20 -- common/autotest_common.sh@950 -- # wait 2021913 00:24:28.883 12:02:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:28.883 12:02:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:28.883 12:02:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:28.883 12:02:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:28.883 12:02:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:28.883 12:02:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.883 12:02:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:28.883 12:02:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.144 12:02:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:29.144 00:24:29.144 real 1m14.862s 00:24:29.144 user 4m32.049s 00:24:29.144 sys 0m7.708s 00:24:29.144 12:02:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:29.144 12:02:22 -- common/autotest_common.sh@10 -- # set +x 00:24:29.144 ************************************ 00:24:29.144 END TEST nvmf_initiator_timeout 00:24:29.144 ************************************ 00:24:29.144 12:02:22 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:24:29.144 12:02:22 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:24:29.144 12:02:22 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:24:29.144 12:02:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:29.144 12:02:22 -- common/autotest_common.sh@10 -- # set +x 00:24:37.286 12:02:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:37.286 12:02:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:37.286 12:02:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:37.286 12:02:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:37.286 12:02:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:37.286 12:02:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:37.286 12:02:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:37.286 12:02:29 -- nvmf/common.sh@294 -- # net_devs=() 00:24:37.286 12:02:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:37.286 12:02:29 -- nvmf/common.sh@295 -- # e810=() 00:24:37.286 12:02:29 -- nvmf/common.sh@295 -- # local -ga e810 00:24:37.286 12:02:29 -- nvmf/common.sh@296 -- # x722=() 00:24:37.286 12:02:29 -- nvmf/common.sh@296 -- # local -ga x722 00:24:37.286 12:02:29 -- nvmf/common.sh@297 -- # mlx=() 00:24:37.286 12:02:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:37.286 12:02:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:37.286 12:02:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:37.286 12:02:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:37.286 12:02:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:37.286 12:02:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
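The device discovery that begins here (and repeats before each test) is a PCI-to-netdev walk: the harness keeps the bus scan in its pci_bus_cache arrays, selects the functions whose IDs match a supported NIC (0x8086:0x159b is an Intel E810 variant, per the e810 array above), and resolves each function to its kernel interface through sysfs. A standalone approximation of that mapping, assuming lspci is available rather than the harness's cached scan:

    # For every Intel E810 function (vendor 0x8086, device 0x159b), print the
    # netdev the kernel bound to it, e.g. "0000:31:00.0 -> cvl_0_0".
    for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
        net_dir="/sys/bus/pci/devices/$pci/net"
        [ -d "$net_dir" ] || continue      # function present but no netdev bound
        for dev in "$net_dir"/*; do
            echo "$pci -> ${dev##*/}"
        done
    done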
00:24:37.286 12:02:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:37.286 12:02:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:37.286 12:02:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:37.286 12:02:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:37.286 12:02:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:37.286 12:02:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:37.286 12:02:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:37.286 12:02:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:37.286 12:02:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:37.286 12:02:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:37.286 12:02:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:37.286 12:02:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:37.286 12:02:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:37.286 12:02:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:37.286 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:37.286 12:02:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:37.286 12:02:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:37.286 12:02:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.286 12:02:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.286 12:02:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:37.286 12:02:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:37.286 12:02:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:37.286 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:37.286 12:02:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:37.286 12:02:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:37.286 12:02:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.286 12:02:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.286 12:02:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:37.286 12:02:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:37.286 12:02:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:37.286 12:02:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:37.286 12:02:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:37.286 12:02:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.286 12:02:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:37.286 12:02:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.286 12:02:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:37.286 Found net devices under 0000:31:00.0: cvl_0_0 00:24:37.286 12:02:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.286 12:02:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:37.286 12:02:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.286 12:02:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:37.286 12:02:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.286 12:02:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:37.286 Found net devices under 0000:31:00.1: cvl_0_1 00:24:37.286 12:02:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.286 12:02:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:37.286 12:02:29 
-- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:37.286 12:02:29 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:24:37.286 12:02:29 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:37.286 12:02:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:37.286 12:02:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:37.286 12:02:29 -- common/autotest_common.sh@10 -- # set +x 00:24:37.286 ************************************ 00:24:37.286 START TEST nvmf_perf_adq 00:24:37.286 ************************************ 00:24:37.286 12:02:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:37.286 * Looking for test storage... 00:24:37.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:37.286 12:02:29 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:37.286 12:02:29 -- nvmf/common.sh@7 -- # uname -s 00:24:37.286 12:02:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:37.286 12:02:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:37.286 12:02:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:37.286 12:02:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:37.286 12:02:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:37.286 12:02:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:37.286 12:02:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:37.286 12:02:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:37.286 12:02:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:37.286 12:02:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:37.286 12:02:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:37.286 12:02:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:37.286 12:02:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:37.286 12:02:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:37.286 12:02:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:37.286 12:02:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:37.286 12:02:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:37.286 12:02:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:37.286 12:02:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:37.286 12:02:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.286 12:02:29 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.286 12:02:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.286 12:02:29 -- paths/export.sh@5 -- # export PATH 00:24:37.286 12:02:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.286 12:02:29 -- nvmf/common.sh@46 -- # : 0 00:24:37.286 12:02:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:37.286 12:02:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:37.286 12:02:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:37.286 12:02:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:37.286 12:02:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:37.286 12:02:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:37.286 12:02:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:37.286 12:02:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:37.286 12:02:29 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:24:37.286 12:02:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:37.286 12:02:29 -- common/autotest_common.sh@10 -- # set +x 00:24:43.877 12:02:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:43.877 12:02:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:43.877 12:02:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:43.877 12:02:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:43.877 12:02:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:43.877 12:02:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:43.877 12:02:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:43.877 12:02:36 -- nvmf/common.sh@294 -- # net_devs=() 00:24:43.877 12:02:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:43.877 12:02:36 -- nvmf/common.sh@295 -- # e810=() 00:24:43.877 12:02:36 -- nvmf/common.sh@295 -- # local -ga e810 00:24:43.877 12:02:36 -- nvmf/common.sh@296 -- # x722=() 00:24:43.877 12:02:36 -- nvmf/common.sh@296 -- # local -ga x722 00:24:43.877 12:02:36 -- nvmf/common.sh@297 -- # mlx=() 00:24:43.877 12:02:36 -- nvmf/common.sh@297 -- # local 
-ga mlx 00:24:43.877 12:02:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:43.877 12:02:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:43.877 12:02:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:43.877 12:02:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:43.877 12:02:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:43.877 12:02:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:43.877 12:02:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:43.877 12:02:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:43.877 12:02:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:43.877 12:02:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:43.877 12:02:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:43.877 12:02:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:43.877 12:02:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:43.877 12:02:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:43.877 12:02:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:43.877 12:02:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:43.877 12:02:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:43.877 12:02:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:43.877 12:02:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:43.877 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:43.877 12:02:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:43.877 12:02:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:43.877 12:02:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.877 12:02:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.877 12:02:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:43.877 12:02:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:43.877 12:02:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:43.877 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:43.877 12:02:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:43.877 12:02:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:43.877 12:02:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.877 12:02:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.877 12:02:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:43.877 12:02:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:43.877 12:02:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:43.877 12:02:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:43.877 12:02:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:43.877 12:02:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.877 12:02:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:43.877 12:02:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.877 12:02:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:43.877 Found net devices under 0000:31:00.0: cvl_0_0 00:24:43.877 12:02:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.877 12:02:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:43.877 12:02:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
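Before the ADQ run, perf_adq.sh cycles the ice driver (the adq_reload_driver call above), presumably so the E810 ports come back without leftover channel or traffic-class state, and then rediscovers them with the same PCI walk. In script form; the final poll is a hypothetical tightening of the plain sleep the harness actually uses:

    # Reload the ice driver and give the renamed cvl_* interfaces time to reappear.
    rmmod ice
    modprobe ice
    sleep 5                                  # what the script does
    # Hypothetical stricter wait (not in the harness): poll for the first port.
    until ip link show cvl_0_0 >/dev/null 2>&1; do
        sleep 1
    done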
00:24:43.877 12:02:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:43.877 12:02:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.877 12:02:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:43.877 Found net devices under 0000:31:00.1: cvl_0_1 00:24:43.877 12:02:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.877 12:02:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:43.877 12:02:36 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:43.877 12:02:36 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:24:43.877 12:02:36 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:43.877 12:02:36 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:24:43.877 12:02:36 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:44.820 12:02:38 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:46.729 12:02:40 -- target/perf_adq.sh@54 -- # sleep 5 00:24:52.020 12:02:45 -- target/perf_adq.sh@67 -- # nvmftestinit 00:24:52.020 12:02:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:52.020 12:02:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:52.020 12:02:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:52.020 12:02:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:52.020 12:02:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:52.020 12:02:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.020 12:02:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:52.020 12:02:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.020 12:02:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:52.020 12:02:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:52.020 12:02:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:52.020 12:02:45 -- common/autotest_common.sh@10 -- # set +x 00:24:52.020 12:02:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:52.020 12:02:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:52.020 12:02:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:52.020 12:02:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:52.020 12:02:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:52.020 12:02:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:52.020 12:02:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:52.020 12:02:45 -- nvmf/common.sh@294 -- # net_devs=() 00:24:52.020 12:02:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:52.020 12:02:45 -- nvmf/common.sh@295 -- # e810=() 00:24:52.020 12:02:45 -- nvmf/common.sh@295 -- # local -ga e810 00:24:52.020 12:02:45 -- nvmf/common.sh@296 -- # x722=() 00:24:52.020 12:02:45 -- nvmf/common.sh@296 -- # local -ga x722 00:24:52.020 12:02:45 -- nvmf/common.sh@297 -- # mlx=() 00:24:52.020 12:02:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:52.020 12:02:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:52.020 12:02:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:52.020 12:02:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:52.020 12:02:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:52.020 12:02:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:52.020 12:02:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:52.020 12:02:45 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:52.020 12:02:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:52.020 12:02:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:52.020 12:02:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:52.020 12:02:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:52.020 12:02:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:52.020 12:02:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:52.020 12:02:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:52.020 12:02:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:52.020 12:02:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:52.020 12:02:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:52.020 12:02:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:52.020 12:02:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:52.020 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:52.020 12:02:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:52.020 12:02:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:52.020 12:02:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.020 12:02:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.020 12:02:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:52.020 12:02:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:52.020 12:02:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:52.020 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:52.020 12:02:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:52.020 12:02:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:52.020 12:02:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.021 12:02:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.021 12:02:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:52.021 12:02:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:52.021 12:02:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:52.021 12:02:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:52.021 12:02:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:52.021 12:02:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.021 12:02:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:52.021 12:02:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.021 12:02:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:52.021 Found net devices under 0000:31:00.0: cvl_0_0 00:24:52.021 12:02:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.021 12:02:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:52.021 12:02:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.021 12:02:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:52.021 12:02:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.021 12:02:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:52.021 Found net devices under 0000:31:00.1: cvl_0_1 00:24:52.021 12:02:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.021 12:02:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:52.021 12:02:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:52.021 12:02:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:52.021 12:02:45 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:52.021 12:02:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:52.021 12:02:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:52.021 12:02:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:52.021 12:02:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:52.021 12:02:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:52.021 12:02:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:52.021 12:02:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:52.021 12:02:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:52.021 12:02:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:52.021 12:02:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:52.021 12:02:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:52.021 12:02:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:52.021 12:02:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:52.021 12:02:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:52.021 12:02:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:52.021 12:02:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:52.021 12:02:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:52.021 12:02:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:52.021 12:02:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:52.021 12:02:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:52.021 12:02:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:52.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:52.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:24:52.021 00:24:52.021 --- 10.0.0.2 ping statistics --- 00:24:52.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.021 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:24:52.021 12:02:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:52.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:52.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.443 ms 00:24:52.021 00:24:52.021 --- 10.0.0.1 ping statistics --- 00:24:52.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.021 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:24:52.021 12:02:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:52.021 12:02:45 -- nvmf/common.sh@410 -- # return 0 00:24:52.021 12:02:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:52.021 12:02:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:52.021 12:02:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:52.021 12:02:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:52.021 12:02:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:52.021 12:02:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:52.021 12:02:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:52.021 12:02:45 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:52.021 12:02:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:52.021 12:02:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:52.021 12:02:45 -- common/autotest_common.sh@10 -- # set +x 00:24:52.021 12:02:45 -- nvmf/common.sh@469 -- # nvmfpid=2044435 00:24:52.021 12:02:45 -- nvmf/common.sh@470 -- # waitforlisten 2044435 00:24:52.021 12:02:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:52.021 12:02:45 -- common/autotest_common.sh@819 -- # '[' -z 2044435 ']' 00:24:52.021 12:02:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.021 12:02:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:52.021 12:02:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.021 12:02:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:52.021 12:02:45 -- common/autotest_common.sh@10 -- # set +x 00:24:52.021 [2024-06-10 12:02:45.663616] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:52.021 [2024-06-10 12:02:45.663720] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:52.021 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.021 [2024-06-10 12:02:45.737807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:52.282 [2024-06-10 12:02:45.810061] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:52.282 [2024-06-10 12:02:45.810198] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:52.282 [2024-06-10 12:02:45.810208] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:52.282 [2024-06-10 12:02:45.810216] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
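The nvmf_tcp_init sequence traced just above builds the single-host loopback topology these phy tests rely on: one E810 port is moved into a private network namespace and plays the target, while its sibling port stays in the root namespace as the initiator (the two ports are presumably cabled to each other on this rig, since the cross-pings succeed). Condensed:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                         # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                  # target -> initiator
    # The target application itself then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc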
00:24:52.282 [2024-06-10 12:02:45.810367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:52.282 [2024-06-10 12:02:45.810582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:52.282 [2024-06-10 12:02:45.810739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:52.282 [2024-06-10 12:02:45.810740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.855 12:02:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:52.855 12:02:46 -- common/autotest_common.sh@852 -- # return 0 00:24:52.855 12:02:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:52.855 12:02:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:52.855 12:02:46 -- common/autotest_common.sh@10 -- # set +x 00:24:52.855 12:02:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:52.855 12:02:46 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:24:52.855 12:02:46 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:52.855 12:02:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:52.855 12:02:46 -- common/autotest_common.sh@10 -- # set +x 00:24:52.855 12:02:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:52.855 12:02:46 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:52.855 12:02:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:52.855 12:02:46 -- common/autotest_common.sh@10 -- # set +x 00:24:52.855 12:02:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:52.855 12:02:46 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:52.855 12:02:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:52.855 12:02:46 -- common/autotest_common.sh@10 -- # set +x 00:24:52.855 [2024-06-10 12:02:46.562183] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:52.855 12:02:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:52.855 12:02:46 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:52.855 12:02:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:52.855 12:02:46 -- common/autotest_common.sh@10 -- # set +x 00:24:52.855 Malloc1 00:24:52.855 12:02:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:52.855 12:02:46 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:52.855 12:02:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:52.855 12:02:46 -- common/autotest_common.sh@10 -- # set +x 00:24:52.855 12:02:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:52.855 12:02:46 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:52.855 12:02:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:52.855 12:02:46 -- common/autotest_common.sh@10 -- # set +x 00:24:52.855 12:02:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:52.855 12:02:46 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:52.855 12:02:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:52.855 12:02:46 -- common/autotest_common.sh@10 -- # set +x 00:24:52.855 [2024-06-10 12:02:46.617542] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:52.855 12:02:46 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:52.855 12:02:46 -- target/perf_adq.sh@73 -- # perfpid=2044690 00:24:52.855 12:02:46 -- target/perf_adq.sh@74 -- # sleep 2 00:24:52.855 12:02:46 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:53.116 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.030 12:02:48 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:24:55.030 12:02:48 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:55.030 12:02:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:55.030 12:02:48 -- target/perf_adq.sh@76 -- # wc -l 00:24:55.030 12:02:48 -- common/autotest_common.sh@10 -- # set +x 00:24:55.030 12:02:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:55.030 12:02:48 -- target/perf_adq.sh@76 -- # count=4 00:24:55.030 12:02:48 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:24:55.030 12:02:48 -- target/perf_adq.sh@81 -- # wait 2044690 00:25:03.173 Initializing NVMe Controllers 00:25:03.173 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:03.173 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:03.173 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:03.173 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:03.173 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:03.173 Initialization complete. Launching workers. 00:25:03.173 ======================================================== 00:25:03.173 Latency(us) 00:25:03.173 Device Information : IOPS MiB/s Average min max 00:25:03.173 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 15202.60 59.39 4209.83 1114.60 8120.23 00:25:03.173 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 16629.40 64.96 3847.99 813.05 8710.55 00:25:03.173 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11930.30 46.60 5364.45 1099.37 10263.70 00:25:03.173 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12989.40 50.74 4926.46 1091.88 11114.61 00:25:03.173 ======================================================== 00:25:03.173 Total : 56751.70 221.69 4510.55 813.05 11114.61 00:25:03.173 00:25:03.173 12:02:56 -- target/perf_adq.sh@82 -- # nvmftestfini 00:25:03.173 12:02:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:03.173 12:02:56 -- nvmf/common.sh@116 -- # sync 00:25:03.173 12:02:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:03.173 12:02:56 -- nvmf/common.sh@119 -- # set +e 00:25:03.173 12:02:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:03.173 12:02:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:03.173 rmmod nvme_tcp 00:25:03.173 rmmod nvme_fabrics 00:25:03.173 rmmod nvme_keyring 00:25:03.173 12:02:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:03.173 12:02:56 -- nvmf/common.sh@123 -- # set -e 00:25:03.173 12:02:56 -- nvmf/common.sh@124 -- # return 0 00:25:03.173 12:02:56 -- nvmf/common.sh@477 -- # '[' -n 2044435 ']' 00:25:03.173 12:02:56 -- nvmf/common.sh@478 -- # killprocess 2044435 00:25:03.173 12:02:56 -- common/autotest_common.sh@926 -- # '[' -z 2044435 ']' 00:25:03.173 12:02:56 -- common/autotest_common.sh@930 -- # 
kill -0 2044435 00:25:03.173 12:02:56 -- common/autotest_common.sh@931 -- # uname 00:25:03.173 12:02:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:03.173 12:02:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2044435 00:25:03.173 12:02:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:03.173 12:02:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:03.173 12:02:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2044435' 00:25:03.173 killing process with pid 2044435 00:25:03.173 12:02:56 -- common/autotest_common.sh@945 -- # kill 2044435 00:25:03.173 12:02:56 -- common/autotest_common.sh@950 -- # wait 2044435 00:25:03.434 12:02:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:03.434 12:02:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:03.434 12:02:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:03.434 12:02:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:03.434 12:02:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:03.434 12:02:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.434 12:02:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:03.434 12:02:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.404 12:02:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:05.404 12:02:59 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:25:05.404 12:02:59 -- target/perf_adq.sh@52 -- # rmmod ice 00:25:07.358 12:03:00 -- target/perf_adq.sh@53 -- # modprobe ice 00:25:08.744 12:03:02 -- target/perf_adq.sh@54 -- # sleep 5 00:25:14.037 12:03:07 -- target/perf_adq.sh@87 -- # nvmftestinit 00:25:14.037 12:03:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:14.037 12:03:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:14.037 12:03:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:14.037 12:03:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:14.037 12:03:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:14.037 12:03:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.037 12:03:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:14.037 12:03:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.037 12:03:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:14.037 12:03:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:14.037 12:03:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:14.037 12:03:07 -- common/autotest_common.sh@10 -- # set +x 00:25:14.037 12:03:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:14.037 12:03:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:14.037 12:03:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:14.037 12:03:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:14.037 12:03:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:14.037 12:03:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:14.037 12:03:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:14.037 12:03:07 -- nvmf/common.sh@294 -- # net_devs=() 00:25:14.037 12:03:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:14.037 12:03:07 -- nvmf/common.sh@295 -- # e810=() 00:25:14.037 12:03:07 -- nvmf/common.sh@295 -- # local -ga e810 00:25:14.037 12:03:07 -- nvmf/common.sh@296 -- # x722=() 00:25:14.037 12:03:07 -- nvmf/common.sh@296 -- # local -ga x722 00:25:14.037 12:03:07 -- nvmf/common.sh@297 -- # mlx=() 00:25:14.037 12:03:07 -- 
nvmf/common.sh@297 -- # local -ga mlx 00:25:14.037 12:03:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.037 12:03:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.037 12:03:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.037 12:03:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.037 12:03:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.037 12:03:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.037 12:03:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.037 12:03:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.037 12:03:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.037 12:03:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.037 12:03:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.037 12:03:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:14.037 12:03:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:14.037 12:03:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:14.037 12:03:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:14.037 12:03:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:14.037 12:03:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:14.037 12:03:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:14.037 12:03:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:14.037 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:14.037 12:03:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:14.037 12:03:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:14.037 12:03:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.037 12:03:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.037 12:03:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:14.037 12:03:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:14.037 12:03:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:14.037 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:14.038 12:03:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:14.038 12:03:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:14.038 12:03:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.038 12:03:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.038 12:03:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:14.038 12:03:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:14.038 12:03:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:14.038 12:03:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:14.038 12:03:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:14.038 12:03:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.038 12:03:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:14.038 12:03:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.038 12:03:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:14.038 Found net devices under 0000:31:00.0: cvl_0_0 00:25:14.038 12:03:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.038 12:03:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:14.038 12:03:07 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.038 12:03:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:14.038 12:03:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.038 12:03:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:14.038 Found net devices under 0000:31:00.1: cvl_0_1 00:25:14.038 12:03:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.038 12:03:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:14.038 12:03:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:14.038 12:03:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:14.038 12:03:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:14.038 12:03:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:14.038 12:03:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.038 12:03:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.038 12:03:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:14.038 12:03:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:14.038 12:03:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:14.038 12:03:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:14.038 12:03:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:14.038 12:03:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:14.038 12:03:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.038 12:03:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:14.038 12:03:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:14.038 12:03:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:14.038 12:03:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:14.038 12:03:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:14.038 12:03:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:14.038 12:03:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:14.038 12:03:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:14.038 12:03:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:14.038 12:03:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:14.038 12:03:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:14.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:14.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.530 ms 00:25:14.038 00:25:14.038 --- 10.0.0.2 ping statistics --- 00:25:14.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.038 rtt min/avg/max/mdev = 0.530/0.530/0.530/0.000 ms 00:25:14.038 12:03:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:14.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:14.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.358 ms 00:25:14.038 00:25:14.038 --- 10.0.0.1 ping statistics --- 00:25:14.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.038 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:25:14.038 12:03:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.038 12:03:07 -- nvmf/common.sh@410 -- # return 0 00:25:14.038 12:03:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:14.038 12:03:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:14.038 12:03:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:14.038 12:03:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:14.038 12:03:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:14.038 12:03:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:14.038 12:03:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:14.038 12:03:07 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:25:14.038 12:03:07 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:25:14.038 12:03:07 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:25:14.038 12:03:07 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:25:14.038 net.core.busy_poll = 1 00:25:14.038 12:03:07 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:25:14.038 net.core.busy_read = 1 00:25:14.038 12:03:07 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:25:14.038 12:03:07 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:25:14.300 12:03:07 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:25:14.300 12:03:07 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:25:14.300 12:03:07 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:25:14.300 12:03:08 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:14.300 12:03:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:14.300 12:03:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:14.300 12:03:08 -- common/autotest_common.sh@10 -- # set +x 00:25:14.300 12:03:08 -- nvmf/common.sh@469 -- # nvmfpid=2049415 00:25:14.300 12:03:08 -- nvmf/common.sh@470 -- # waitforlisten 2049415 00:25:14.300 12:03:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:14.300 12:03:08 -- common/autotest_common.sh@819 -- # '[' -z 2049415 ']' 00:25:14.300 12:03:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.300 12:03:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:14.300 12:03:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
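The trace above amounts to the following setup: one E810 port (cvl_0_0, at 0000:31:00.0) is moved into its own network namespace to act as the NVMe/TCP target, the initiator keeps cvl_0_1, and ADQ is enabled so that traffic to TCP port 4420 lands in a dedicated hardware traffic class. A minimal sketch of that sequence, with every command copied from the trace (only the grouping and comments are added):

# Target interface lives in its own namespace; the initiator stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# ADQ: enable hardware TC offload and busy polling, then steer NVMe/TCP (dst port 4420) into TC 1.
ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
# The harness then runs scripts/perf/nvmf/set_xps_rxqs cvl_0_0 to align XPS with the receive queues.

The two pings in the trace simply confirm that 10.0.0.2 (target namespace) and 10.0.0.1 (initiator side) can reach each other before the target application is started.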
00:25:14.300 12:03:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:14.300 12:03:08 -- common/autotest_common.sh@10 -- # set +x 00:25:14.561 [2024-06-10 12:03:08.083190] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:14.561 [2024-06-10 12:03:08.083262] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.562 EAL: No free 2048 kB hugepages reported on node 1 00:25:14.562 [2024-06-10 12:03:08.154526] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:14.562 [2024-06-10 12:03:08.227582] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:14.562 [2024-06-10 12:03:08.227716] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.562 [2024-06-10 12:03:08.227726] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.562 [2024-06-10 12:03:08.227734] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:14.562 [2024-06-10 12:03:08.227906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.562 [2024-06-10 12:03:08.228010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:14.562 [2024-06-10 12:03:08.228172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.562 [2024-06-10 12:03:08.228172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:15.135 12:03:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:15.135 12:03:08 -- common/autotest_common.sh@852 -- # return 0 00:25:15.135 12:03:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:15.135 12:03:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:15.135 12:03:08 -- common/autotest_common.sh@10 -- # set +x 00:25:15.135 12:03:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:15.135 12:03:08 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:25:15.135 12:03:08 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:25:15.135 12:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:15.135 12:03:08 -- common/autotest_common.sh@10 -- # set +x 00:25:15.135 12:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:15.135 12:03:08 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:25:15.135 12:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:15.135 12:03:08 -- common/autotest_common.sh@10 -- # set +x 00:25:15.397 12:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:15.397 12:03:08 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:25:15.397 12:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:15.397 12:03:08 -- common/autotest_common.sh@10 -- # set +x 00:25:15.397 [2024-06-10 12:03:08.983468] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.397 12:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:15.397 12:03:08 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:15.397 12:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:15.397 12:03:08 -- 
common/autotest_common.sh@10 -- # set +x 00:25:15.397 Malloc1 00:25:15.397 12:03:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:15.397 12:03:09 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:15.397 12:03:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:15.397 12:03:09 -- common/autotest_common.sh@10 -- # set +x 00:25:15.397 12:03:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:15.397 12:03:09 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:15.397 12:03:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:15.397 12:03:09 -- common/autotest_common.sh@10 -- # set +x 00:25:15.397 12:03:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:15.397 12:03:09 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:15.397 12:03:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:15.397 12:03:09 -- common/autotest_common.sh@10 -- # set +x 00:25:15.397 [2024-06-10 12:03:09.038894] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.397 12:03:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:15.397 12:03:09 -- target/perf_adq.sh@94 -- # perfpid=2049730 00:25:15.397 12:03:09 -- target/perf_adq.sh@95 -- # sleep 2 00:25:15.397 12:03:09 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:15.397 EAL: No free 2048 kB hugepages reported on node 1 00:25:17.314 12:03:11 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:25:17.314 12:03:11 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:25:17.314 12:03:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.314 12:03:11 -- common/autotest_common.sh@10 -- # set +x 00:25:17.314 12:03:11 -- target/perf_adq.sh@97 -- # wc -l 00:25:17.314 12:03:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:17.314 12:03:11 -- target/perf_adq.sh@97 -- # count=2 00:25:17.314 12:03:11 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:25:17.314 12:03:11 -- target/perf_adq.sh@103 -- # wait 2049730 00:25:27.321 Initializing NVMe Controllers 00:25:27.321 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:27.321 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:27.321 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:27.321 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:27.321 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:27.321 Initialization complete. Launching workers. 
00:25:27.321 ======================================================== 00:25:27.321 Latency(us) 00:25:27.321 Device Information : IOPS MiB/s Average min max 00:25:27.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10482.30 40.95 6121.61 1238.42 53387.99 00:25:27.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10338.40 40.38 6202.28 1217.84 49993.47 00:25:27.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 12543.40 49.00 5102.37 741.59 50829.29 00:25:27.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9460.20 36.95 6765.11 1278.76 54062.35 00:25:27.321 ======================================================== 00:25:27.321 Total : 42824.30 167.28 5984.70 741.59 54062.35 00:25:27.321 00:25:27.321 12:03:19 -- target/perf_adq.sh@104 -- # nvmftestfini 00:25:27.321 12:03:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:27.321 12:03:19 -- nvmf/common.sh@116 -- # sync 00:25:27.321 12:03:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:27.321 12:03:19 -- nvmf/common.sh@119 -- # set +e 00:25:27.321 12:03:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:27.321 12:03:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:27.321 rmmod nvme_tcp 00:25:27.321 rmmod nvme_fabrics 00:25:27.321 rmmod nvme_keyring 00:25:27.321 12:03:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:27.321 12:03:19 -- nvmf/common.sh@123 -- # set -e 00:25:27.321 12:03:19 -- nvmf/common.sh@124 -- # return 0 00:25:27.321 12:03:19 -- nvmf/common.sh@477 -- # '[' -n 2049415 ']' 00:25:27.321 12:03:19 -- nvmf/common.sh@478 -- # killprocess 2049415 00:25:27.321 12:03:19 -- common/autotest_common.sh@926 -- # '[' -z 2049415 ']' 00:25:27.321 12:03:19 -- common/autotest_common.sh@930 -- # kill -0 2049415 00:25:27.321 12:03:19 -- common/autotest_common.sh@931 -- # uname 00:25:27.321 12:03:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:27.321 12:03:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2049415 00:25:27.321 12:03:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:27.321 12:03:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:27.321 12:03:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2049415' 00:25:27.321 killing process with pid 2049415 00:25:27.321 12:03:19 -- common/autotest_common.sh@945 -- # kill 2049415 00:25:27.321 12:03:19 -- common/autotest_common.sh@950 -- # wait 2049415 00:25:27.321 12:03:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:27.321 12:03:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:27.321 12:03:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:27.321 12:03:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:27.321 12:03:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:27.321 12:03:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.321 12:03:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:27.321 12:03:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.893 12:03:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:27.893 12:03:21 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:25:27.893 00:25:27.893 real 0m51.778s 00:25:27.893 user 2m45.735s 00:25:27.893 sys 0m11.973s 00:25:27.893 12:03:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:27.893 12:03:21 -- common/autotest_common.sh@10 -- # set +x 00:25:27.893 
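For reference, the target and workload that produced the latency table above were configured entirely over SPDK's RPC interface. The harness drives this through its rpc_cmd wrapper; the sketch below expresses the same calls as plain scripts/rpc.py invocations (an assumed equivalent), with all values copied from the trace and the long Jenkins workspace paths shortened:

# nvmf_tgt was started with '-m 0xF --wait-for-rpc', so socket options can be set before framework init.
./scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
./scripts/rpc.py framework_start_init
./scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: 4 KiB random reads, queue depth 64, 10 seconds, cores 4-7 (-c 0xF0),
# which is why the latency table above reports one line per core 4 through 7.
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'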
************************************ 00:25:27.893 END TEST nvmf_perf_adq 00:25:27.893 ************************************ 00:25:27.893 12:03:21 -- nvmf/nvmf.sh@80 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:27.893 12:03:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:27.893 12:03:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:27.893 12:03:21 -- common/autotest_common.sh@10 -- # set +x 00:25:27.893 ************************************ 00:25:27.893 START TEST nvmf_shutdown 00:25:27.893 ************************************ 00:25:27.893 12:03:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:28.155 * Looking for test storage... 00:25:28.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:28.155 12:03:21 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:28.155 12:03:21 -- nvmf/common.sh@7 -- # uname -s 00:25:28.155 12:03:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.155 12:03:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.155 12:03:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.155 12:03:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.155 12:03:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.155 12:03:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.155 12:03:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.155 12:03:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.155 12:03:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.155 12:03:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.155 12:03:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:28.155 12:03:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:28.155 12:03:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.155 12:03:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.155 12:03:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:28.155 12:03:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:28.155 12:03:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.155 12:03:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.155 12:03:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.155 12:03:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.155 12:03:21 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.155 12:03:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.155 12:03:21 -- paths/export.sh@5 -- # export PATH 00:25:28.155 12:03:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.155 12:03:21 -- nvmf/common.sh@46 -- # : 0 00:25:28.155 12:03:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:28.155 12:03:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:28.155 12:03:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:28.155 12:03:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.155 12:03:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.155 12:03:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:28.155 12:03:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:28.155 12:03:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:28.155 12:03:21 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:28.155 12:03:21 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:28.155 12:03:21 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:25:28.155 12:03:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:28.155 12:03:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:28.155 12:03:21 -- common/autotest_common.sh@10 -- # set +x 00:25:28.155 ************************************ 00:25:28.155 START TEST nvmf_shutdown_tc1 00:25:28.155 ************************************ 00:25:28.155 12:03:21 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:25:28.155 12:03:21 -- target/shutdown.sh@74 -- # starttarget 00:25:28.155 12:03:21 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:28.155 12:03:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:28.155 12:03:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.155 12:03:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:28.155 12:03:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:28.155 12:03:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:28.155 
12:03:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.155 12:03:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:28.155 12:03:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.155 12:03:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:28.155 12:03:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:28.156 12:03:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:28.156 12:03:21 -- common/autotest_common.sh@10 -- # set +x 00:25:36.297 12:03:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:36.297 12:03:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:36.297 12:03:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:36.297 12:03:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:36.297 12:03:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:36.297 12:03:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:36.297 12:03:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:36.297 12:03:28 -- nvmf/common.sh@294 -- # net_devs=() 00:25:36.297 12:03:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:36.297 12:03:28 -- nvmf/common.sh@295 -- # e810=() 00:25:36.297 12:03:28 -- nvmf/common.sh@295 -- # local -ga e810 00:25:36.297 12:03:28 -- nvmf/common.sh@296 -- # x722=() 00:25:36.297 12:03:28 -- nvmf/common.sh@296 -- # local -ga x722 00:25:36.297 12:03:28 -- nvmf/common.sh@297 -- # mlx=() 00:25:36.297 12:03:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:36.297 12:03:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:36.297 12:03:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:36.297 12:03:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:36.297 12:03:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:36.297 12:03:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:36.297 12:03:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:36.297 12:03:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:36.297 12:03:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:36.297 12:03:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:36.297 12:03:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:36.297 12:03:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:36.297 12:03:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:36.297 12:03:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:36.297 12:03:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:36.297 12:03:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:36.297 12:03:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:36.297 12:03:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:36.297 12:03:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:36.297 12:03:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:36.297 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:36.297 12:03:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:36.297 12:03:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:36.297 12:03:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:36.297 12:03:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:36.297 12:03:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:36.297 12:03:28 -- nvmf/common.sh@339 
-- # for pci in "${pci_devs[@]}" 00:25:36.297 12:03:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:36.297 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:36.297 12:03:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:36.297 12:03:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:36.297 12:03:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:36.297 12:03:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:36.297 12:03:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:36.297 12:03:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:36.297 12:03:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:36.297 12:03:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:36.297 12:03:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:36.297 12:03:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.297 12:03:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:36.297 12:03:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.297 12:03:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:36.297 Found net devices under 0000:31:00.0: cvl_0_0 00:25:36.297 12:03:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.297 12:03:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:36.297 12:03:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.297 12:03:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:36.297 12:03:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.297 12:03:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:36.297 Found net devices under 0000:31:00.1: cvl_0_1 00:25:36.297 12:03:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.297 12:03:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:36.297 12:03:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:36.297 12:03:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:36.297 12:03:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:36.297 12:03:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:36.297 12:03:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:36.298 12:03:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:36.298 12:03:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:36.298 12:03:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:36.298 12:03:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:36.298 12:03:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:36.298 12:03:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:36.298 12:03:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:36.298 12:03:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:36.298 12:03:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:36.298 12:03:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:36.298 12:03:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:36.298 12:03:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:36.298 12:03:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:36.298 12:03:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:36.298 12:03:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:36.298 12:03:28 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:36.298 12:03:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:36.298 12:03:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:36.298 12:03:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:36.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:36.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.504 ms 00:25:36.298 00:25:36.298 --- 10.0.0.2 ping statistics --- 00:25:36.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.298 rtt min/avg/max/mdev = 0.504/0.504/0.504/0.000 ms 00:25:36.298 12:03:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:36.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:36.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms 00:25:36.298 00:25:36.298 --- 10.0.0.1 ping statistics --- 00:25:36.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.298 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:25:36.298 12:03:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:36.298 12:03:28 -- nvmf/common.sh@410 -- # return 0 00:25:36.298 12:03:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:36.298 12:03:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:36.298 12:03:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:36.298 12:03:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:36.298 12:03:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:36.298 12:03:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:36.298 12:03:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:36.298 12:03:28 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:36.298 12:03:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:36.298 12:03:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:36.298 12:03:28 -- common/autotest_common.sh@10 -- # set +x 00:25:36.298 12:03:28 -- nvmf/common.sh@469 -- # nvmfpid=2056429 00:25:36.298 12:03:28 -- nvmf/common.sh@470 -- # waitforlisten 2056429 00:25:36.298 12:03:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:36.298 12:03:28 -- common/autotest_common.sh@819 -- # '[' -z 2056429 ']' 00:25:36.298 12:03:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.298 12:03:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:36.298 12:03:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:36.298 12:03:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:36.298 12:03:28 -- common/autotest_common.sh@10 -- # set +x 00:25:36.298 [2024-06-10 12:03:29.016464] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:36.298 [2024-06-10 12:03:29.016526] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.298 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.298 [2024-06-10 12:03:29.103107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:36.298 [2024-06-10 12:03:29.194490] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:36.298 [2024-06-10 12:03:29.194641] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:36.298 [2024-06-10 12:03:29.194657] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:36.298 [2024-06-10 12:03:29.194665] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:36.298 [2024-06-10 12:03:29.194817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:36.298 [2024-06-10 12:03:29.194989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:36.298 [2024-06-10 12:03:29.195155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.298 [2024-06-10 12:03:29.195155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:36.298 12:03:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:36.298 12:03:29 -- common/autotest_common.sh@852 -- # return 0 00:25:36.298 12:03:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:36.298 12:03:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:36.298 12:03:29 -- common/autotest_common.sh@10 -- # set +x 00:25:36.298 12:03:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:36.298 12:03:29 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:36.298 12:03:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.298 12:03:29 -- common/autotest_common.sh@10 -- # set +x 00:25:36.298 [2024-06-10 12:03:29.845295] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:36.298 12:03:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.298 12:03:29 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:36.298 12:03:29 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:36.298 12:03:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:36.298 12:03:29 -- common/autotest_common.sh@10 -- # set +x 00:25:36.298 12:03:29 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:36.298 12:03:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:36.298 12:03:29 -- target/shutdown.sh@28 -- # cat 00:25:36.298 12:03:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:36.298 12:03:29 -- target/shutdown.sh@28 -- # cat 00:25:36.298 12:03:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:36.298 12:03:29 -- target/shutdown.sh@28 -- # cat 00:25:36.298 12:03:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:36.298 12:03:29 -- target/shutdown.sh@28 -- # cat 00:25:36.298 12:03:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:36.298 12:03:29 -- target/shutdown.sh@28 -- # cat 00:25:36.298 12:03:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:36.298 12:03:29 -- 
target/shutdown.sh@28 -- # cat 00:25:36.298 12:03:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:36.298 12:03:29 -- target/shutdown.sh@28 -- # cat 00:25:36.298 12:03:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:36.298 12:03:29 -- target/shutdown.sh@28 -- # cat 00:25:36.298 12:03:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:36.298 12:03:29 -- target/shutdown.sh@28 -- # cat 00:25:36.298 12:03:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:36.298 12:03:29 -- target/shutdown.sh@28 -- # cat 00:25:36.298 12:03:29 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:36.298 12:03:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.298 12:03:29 -- common/autotest_common.sh@10 -- # set +x 00:25:36.298 Malloc1 00:25:36.298 [2024-06-10 12:03:29.948729] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:36.298 Malloc2 00:25:36.298 Malloc3 00:25:36.298 Malloc4 00:25:36.558 Malloc5 00:25:36.558 Malloc6 00:25:36.558 Malloc7 00:25:36.558 Malloc8 00:25:36.558 Malloc9 00:25:36.558 Malloc10 00:25:36.558 12:03:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.558 12:03:30 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:36.558 12:03:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:36.558 12:03:30 -- common/autotest_common.sh@10 -- # set +x 00:25:36.819 12:03:30 -- target/shutdown.sh@78 -- # perfpid=2056655 00:25:36.819 12:03:30 -- target/shutdown.sh@79 -- # waitforlisten 2056655 /var/tmp/bdevperf.sock 00:25:36.819 12:03:30 -- common/autotest_common.sh@819 -- # '[' -z 2056655 ']' 00:25:36.819 12:03:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:36.819 12:03:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:36.819 12:03:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:36.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
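Unlike the single-subsystem perf_adq target, the shutdown_tc1 target above fans out to ten subsystems: the trace shows rpcs.txt being removed, rebuilt by a loop of cat calls, and then replayed through one rpc_cmd invocation. The heredoc bodies are collapsed in the trace, so the per-iteration commands below are a reconstruction based on the visible results (Malloc1 through Malloc10, listeners on 10.0.0.2:4420); the malloc size and block size come from MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 in shutdown.sh, the serial-number format is illustrative, and the stdin batch replay is an assumed equivalent of the harness's rpc_cmd:

rm -f rpcs.txt
for i in {1..10}; do
    {
        echo "bdev_malloc_create 64 512 -b Malloc$i"
        echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
        echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
        echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
    } >> rpcs.txt
done
# Replay the whole batch against the running nvmf_tgt; rpc.py reads one command per line from stdin.
./scripts/rpc.py < rpcs.txt

The single 'NVMe/TCP Target Listening on 10.0.0.2 port 4420' notice covers all ten subsystems, since they share the same address and port.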
00:25:36.819 12:03:30 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:25:36.819 12:03:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:36.819 12:03:30 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:36.819 12:03:30 -- common/autotest_common.sh@10 -- # set +x 00:25:36.820 12:03:30 -- nvmf/common.sh@520 -- # config=() 00:25:36.820 12:03:30 -- nvmf/common.sh@520 -- # local subsystem config 00:25:36.820 12:03:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:36.820 12:03:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:36.820 { 00:25:36.820 "params": { 00:25:36.820 "name": "Nvme$subsystem", 00:25:36.820 "trtype": "$TEST_TRANSPORT", 00:25:36.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.820 "adrfam": "ipv4", 00:25:36.820 "trsvcid": "$NVMF_PORT", 00:25:36.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.820 "hdgst": ${hdgst:-false}, 00:25:36.820 "ddgst": ${ddgst:-false} 00:25:36.820 }, 00:25:36.820 "method": "bdev_nvme_attach_controller" 00:25:36.820 } 00:25:36.820 EOF 00:25:36.820 )") 00:25:36.820 12:03:30 -- nvmf/common.sh@542 -- # cat 00:25:36.820 12:03:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:36.820 12:03:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:36.820 { 00:25:36.820 "params": { 00:25:36.820 "name": "Nvme$subsystem", 00:25:36.820 "trtype": "$TEST_TRANSPORT", 00:25:36.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.820 "adrfam": "ipv4", 00:25:36.820 "trsvcid": "$NVMF_PORT", 00:25:36.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.820 "hdgst": ${hdgst:-false}, 00:25:36.820 "ddgst": ${ddgst:-false} 00:25:36.820 }, 00:25:36.820 "method": "bdev_nvme_attach_controller" 00:25:36.820 } 00:25:36.820 EOF 00:25:36.820 )") 00:25:36.820 12:03:30 -- nvmf/common.sh@542 -- # cat 00:25:36.820 12:03:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:36.820 12:03:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:36.820 { 00:25:36.820 "params": { 00:25:36.820 "name": "Nvme$subsystem", 00:25:36.820 "trtype": "$TEST_TRANSPORT", 00:25:36.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.820 "adrfam": "ipv4", 00:25:36.820 "trsvcid": "$NVMF_PORT", 00:25:36.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.820 "hdgst": ${hdgst:-false}, 00:25:36.820 "ddgst": ${ddgst:-false} 00:25:36.820 }, 00:25:36.820 "method": "bdev_nvme_attach_controller" 00:25:36.820 } 00:25:36.820 EOF 00:25:36.820 )") 00:25:36.820 12:03:30 -- nvmf/common.sh@542 -- # cat 00:25:36.820 12:03:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:36.820 12:03:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:36.820 { 00:25:36.820 "params": { 00:25:36.820 "name": "Nvme$subsystem", 00:25:36.820 "trtype": "$TEST_TRANSPORT", 00:25:36.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.820 "adrfam": "ipv4", 00:25:36.820 "trsvcid": "$NVMF_PORT", 00:25:36.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.820 "hdgst": ${hdgst:-false}, 00:25:36.820 "ddgst": ${ddgst:-false} 00:25:36.820 }, 00:25:36.820 "method": "bdev_nvme_attach_controller" 00:25:36.820 } 00:25:36.820 EOF 00:25:36.820 )") 00:25:36.820 12:03:30 -- 
nvmf/common.sh@542 -- # cat 00:25:36.820 12:03:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:36.820 12:03:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:36.820 { 00:25:36.820 "params": { 00:25:36.820 "name": "Nvme$subsystem", 00:25:36.820 "trtype": "$TEST_TRANSPORT", 00:25:36.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.820 "adrfam": "ipv4", 00:25:36.820 "trsvcid": "$NVMF_PORT", 00:25:36.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.820 "hdgst": ${hdgst:-false}, 00:25:36.820 "ddgst": ${ddgst:-false} 00:25:36.820 }, 00:25:36.820 "method": "bdev_nvme_attach_controller" 00:25:36.820 } 00:25:36.820 EOF 00:25:36.820 )") 00:25:36.820 12:03:30 -- nvmf/common.sh@542 -- # cat 00:25:36.820 12:03:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:36.820 12:03:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:36.820 { 00:25:36.820 "params": { 00:25:36.820 "name": "Nvme$subsystem", 00:25:36.820 "trtype": "$TEST_TRANSPORT", 00:25:36.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.820 "adrfam": "ipv4", 00:25:36.820 "trsvcid": "$NVMF_PORT", 00:25:36.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.820 "hdgst": ${hdgst:-false}, 00:25:36.820 "ddgst": ${ddgst:-false} 00:25:36.820 }, 00:25:36.820 "method": "bdev_nvme_attach_controller" 00:25:36.820 } 00:25:36.820 EOF 00:25:36.820 )") 00:25:36.820 12:03:30 -- nvmf/common.sh@542 -- # cat 00:25:36.820 12:03:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:36.820 12:03:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:36.820 { 00:25:36.820 "params": { 00:25:36.820 "name": "Nvme$subsystem", 00:25:36.820 "trtype": "$TEST_TRANSPORT", 00:25:36.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.820 "adrfam": "ipv4", 00:25:36.820 "trsvcid": "$NVMF_PORT", 00:25:36.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.820 "hdgst": ${hdgst:-false}, 00:25:36.820 "ddgst": ${ddgst:-false} 00:25:36.820 }, 00:25:36.820 "method": "bdev_nvme_attach_controller" 00:25:36.820 } 00:25:36.820 EOF 00:25:36.820 )") 00:25:36.820 12:03:30 -- nvmf/common.sh@542 -- # cat 00:25:36.820 [2024-06-10 12:03:30.412619] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:36.820 [2024-06-10 12:03:30.412687] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:25:36.820 12:03:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:36.820 12:03:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:36.820 { 00:25:36.820 "params": { 00:25:36.820 "name": "Nvme$subsystem", 00:25:36.820 "trtype": "$TEST_TRANSPORT", 00:25:36.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.820 "adrfam": "ipv4", 00:25:36.820 "trsvcid": "$NVMF_PORT", 00:25:36.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.820 "hdgst": ${hdgst:-false}, 00:25:36.820 "ddgst": ${ddgst:-false} 00:25:36.820 }, 00:25:36.820 "method": "bdev_nvme_attach_controller" 00:25:36.820 } 00:25:36.820 EOF 00:25:36.820 )") 00:25:36.820 12:03:30 -- nvmf/common.sh@542 -- # cat 00:25:36.820 12:03:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:36.820 12:03:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:36.820 { 00:25:36.820 "params": { 00:25:36.820 "name": "Nvme$subsystem", 00:25:36.820 "trtype": "$TEST_TRANSPORT", 00:25:36.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.820 "adrfam": "ipv4", 00:25:36.820 "trsvcid": "$NVMF_PORT", 00:25:36.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.820 "hdgst": ${hdgst:-false}, 00:25:36.820 "ddgst": ${ddgst:-false} 00:25:36.820 }, 00:25:36.820 "method": "bdev_nvme_attach_controller" 00:25:36.820 } 00:25:36.820 EOF 00:25:36.820 )") 00:25:36.820 12:03:30 -- nvmf/common.sh@542 -- # cat 00:25:36.820 12:03:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:36.820 12:03:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:36.820 { 00:25:36.820 "params": { 00:25:36.820 "name": "Nvme$subsystem", 00:25:36.820 "trtype": "$TEST_TRANSPORT", 00:25:36.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.820 "adrfam": "ipv4", 00:25:36.820 "trsvcid": "$NVMF_PORT", 00:25:36.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.820 "hdgst": ${hdgst:-false}, 00:25:36.820 "ddgst": ${ddgst:-false} 00:25:36.820 }, 00:25:36.820 "method": "bdev_nvme_attach_controller" 00:25:36.820 } 00:25:36.820 EOF 00:25:36.820 )") 00:25:36.820 12:03:30 -- nvmf/common.sh@542 -- # cat 00:25:36.820 12:03:30 -- nvmf/common.sh@544 -- # jq . 
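The configuration generated above and printed over the next lines is one long line covering all ten controllers. Reformatted, each element of that config array has the shape below (field values copied from the Nvme1 entry that follows; the outer wrapper that turns the array into a loadable SPDK JSON config for bdev_svc/bdevperf is not shown in this excerpt and is assumed to be the standard one, under which every entry becomes a bdev_nvme_attach_controller call at startup):

{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}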
00:25:36.820 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.820 12:03:30 -- nvmf/common.sh@545 -- # IFS=, 00:25:36.820 12:03:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:36.820 "params": { 00:25:36.820 "name": "Nvme1", 00:25:36.820 "trtype": "tcp", 00:25:36.820 "traddr": "10.0.0.2", 00:25:36.820 "adrfam": "ipv4", 00:25:36.820 "trsvcid": "4420", 00:25:36.820 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:36.820 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:36.820 "hdgst": false, 00:25:36.820 "ddgst": false 00:25:36.820 }, 00:25:36.820 "method": "bdev_nvme_attach_controller" 00:25:36.820 },{ 00:25:36.820 "params": { 00:25:36.820 "name": "Nvme2", 00:25:36.820 "trtype": "tcp", 00:25:36.820 "traddr": "10.0.0.2", 00:25:36.820 "adrfam": "ipv4", 00:25:36.820 "trsvcid": "4420", 00:25:36.820 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:36.820 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:36.820 "hdgst": false, 00:25:36.820 "ddgst": false 00:25:36.820 }, 00:25:36.820 "method": "bdev_nvme_attach_controller" 00:25:36.820 },{ 00:25:36.820 "params": { 00:25:36.820 "name": "Nvme3", 00:25:36.820 "trtype": "tcp", 00:25:36.820 "traddr": "10.0.0.2", 00:25:36.820 "adrfam": "ipv4", 00:25:36.820 "trsvcid": "4420", 00:25:36.820 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:36.820 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:36.820 "hdgst": false, 00:25:36.820 "ddgst": false 00:25:36.820 }, 00:25:36.820 "method": "bdev_nvme_attach_controller" 00:25:36.820 },{ 00:25:36.821 "params": { 00:25:36.821 "name": "Nvme4", 00:25:36.821 "trtype": "tcp", 00:25:36.821 "traddr": "10.0.0.2", 00:25:36.821 "adrfam": "ipv4", 00:25:36.821 "trsvcid": "4420", 00:25:36.821 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:36.821 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:36.821 "hdgst": false, 00:25:36.821 "ddgst": false 00:25:36.821 }, 00:25:36.821 "method": "bdev_nvme_attach_controller" 00:25:36.821 },{ 00:25:36.821 "params": { 00:25:36.821 "name": "Nvme5", 00:25:36.821 "trtype": "tcp", 00:25:36.821 "traddr": "10.0.0.2", 00:25:36.821 "adrfam": "ipv4", 00:25:36.821 "trsvcid": "4420", 00:25:36.821 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:36.821 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:36.821 "hdgst": false, 00:25:36.821 "ddgst": false 00:25:36.821 }, 00:25:36.821 "method": "bdev_nvme_attach_controller" 00:25:36.821 },{ 00:25:36.821 "params": { 00:25:36.821 "name": "Nvme6", 00:25:36.821 "trtype": "tcp", 00:25:36.821 "traddr": "10.0.0.2", 00:25:36.821 "adrfam": "ipv4", 00:25:36.821 "trsvcid": "4420", 00:25:36.821 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:36.821 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:36.821 "hdgst": false, 00:25:36.821 "ddgst": false 00:25:36.821 }, 00:25:36.821 "method": "bdev_nvme_attach_controller" 00:25:36.821 },{ 00:25:36.821 "params": { 00:25:36.821 "name": "Nvme7", 00:25:36.821 "trtype": "tcp", 00:25:36.821 "traddr": "10.0.0.2", 00:25:36.821 "adrfam": "ipv4", 00:25:36.821 "trsvcid": "4420", 00:25:36.821 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:36.821 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:36.821 "hdgst": false, 00:25:36.821 "ddgst": false 00:25:36.821 }, 00:25:36.821 "method": "bdev_nvme_attach_controller" 00:25:36.821 },{ 00:25:36.821 "params": { 00:25:36.821 "name": "Nvme8", 00:25:36.821 "trtype": "tcp", 00:25:36.821 "traddr": "10.0.0.2", 00:25:36.821 "adrfam": "ipv4", 00:25:36.821 "trsvcid": "4420", 00:25:36.821 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:36.821 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:36.821 "hdgst": false, 00:25:36.821 "ddgst": false 
00:25:36.821 }, 00:25:36.821 "method": "bdev_nvme_attach_controller" 00:25:36.821 },{ 00:25:36.821 "params": { 00:25:36.821 "name": "Nvme9", 00:25:36.821 "trtype": "tcp", 00:25:36.821 "traddr": "10.0.0.2", 00:25:36.821 "adrfam": "ipv4", 00:25:36.821 "trsvcid": "4420", 00:25:36.821 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:36.821 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:36.821 "hdgst": false, 00:25:36.821 "ddgst": false 00:25:36.821 }, 00:25:36.821 "method": "bdev_nvme_attach_controller" 00:25:36.821 },{ 00:25:36.821 "params": { 00:25:36.821 "name": "Nvme10", 00:25:36.821 "trtype": "tcp", 00:25:36.821 "traddr": "10.0.0.2", 00:25:36.821 "adrfam": "ipv4", 00:25:36.821 "trsvcid": "4420", 00:25:36.821 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:36.821 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:36.821 "hdgst": false, 00:25:36.821 "ddgst": false 00:25:36.821 }, 00:25:36.821 "method": "bdev_nvme_attach_controller" 00:25:36.821 }' 00:25:36.821 [2024-06-10 12:03:30.475160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.821 [2024-06-10 12:03:30.537956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.203 12:03:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:38.203 12:03:31 -- common/autotest_common.sh@852 -- # return 0 00:25:38.203 12:03:31 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:38.203 12:03:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.203 12:03:31 -- common/autotest_common.sh@10 -- # set +x 00:25:38.203 12:03:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.203 12:03:31 -- target/shutdown.sh@83 -- # kill -9 2056655 00:25:38.203 12:03:31 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:25:38.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2056655 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:25:38.203 12:03:31 -- target/shutdown.sh@87 -- # sleep 1 00:25:39.142 12:03:32 -- target/shutdown.sh@88 -- # kill -0 2056429 00:25:39.142 12:03:32 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:25:39.142 12:03:32 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:39.142 12:03:32 -- nvmf/common.sh@520 -- # config=() 00:25:39.142 12:03:32 -- nvmf/common.sh@520 -- # local subsystem config 00:25:39.142 12:03:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:39.142 12:03:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:39.142 { 00:25:39.142 "params": { 00:25:39.142 "name": "Nvme$subsystem", 00:25:39.142 "trtype": "$TEST_TRANSPORT", 00:25:39.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.142 "adrfam": "ipv4", 00:25:39.142 "trsvcid": "$NVMF_PORT", 00:25:39.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.142 "hdgst": ${hdgst:-false}, 00:25:39.142 "ddgst": ${ddgst:-false} 00:25:39.142 }, 00:25:39.142 "method": "bdev_nvme_attach_controller" 00:25:39.142 } 00:25:39.142 EOF 00:25:39.142 )") 00:25:39.142 12:03:32 -- nvmf/common.sh@542 -- # cat 00:25:39.142 12:03:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:39.142 12:03:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:39.142 { 00:25:39.142 "params": { 00:25:39.142 "name": "Nvme$subsystem", 
00:25:39.142 "trtype": "$TEST_TRANSPORT", 00:25:39.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.142 "adrfam": "ipv4", 00:25:39.142 "trsvcid": "$NVMF_PORT", 00:25:39.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.142 "hdgst": ${hdgst:-false}, 00:25:39.142 "ddgst": ${ddgst:-false} 00:25:39.142 }, 00:25:39.142 "method": "bdev_nvme_attach_controller" 00:25:39.142 } 00:25:39.142 EOF 00:25:39.142 )") 00:25:39.142 12:03:32 -- nvmf/common.sh@542 -- # cat 00:25:39.142 12:03:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:39.142 12:03:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:39.142 { 00:25:39.142 "params": { 00:25:39.142 "name": "Nvme$subsystem", 00:25:39.142 "trtype": "$TEST_TRANSPORT", 00:25:39.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.142 "adrfam": "ipv4", 00:25:39.142 "trsvcid": "$NVMF_PORT", 00:25:39.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.142 "hdgst": ${hdgst:-false}, 00:25:39.142 "ddgst": ${ddgst:-false} 00:25:39.142 }, 00:25:39.142 "method": "bdev_nvme_attach_controller" 00:25:39.142 } 00:25:39.142 EOF 00:25:39.142 )") 00:25:39.142 12:03:32 -- nvmf/common.sh@542 -- # cat 00:25:39.142 12:03:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:39.142 12:03:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:39.142 { 00:25:39.142 "params": { 00:25:39.142 "name": "Nvme$subsystem", 00:25:39.142 "trtype": "$TEST_TRANSPORT", 00:25:39.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.142 "adrfam": "ipv4", 00:25:39.142 "trsvcid": "$NVMF_PORT", 00:25:39.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.142 "hdgst": ${hdgst:-false}, 00:25:39.142 "ddgst": ${ddgst:-false} 00:25:39.142 }, 00:25:39.142 "method": "bdev_nvme_attach_controller" 00:25:39.142 } 00:25:39.142 EOF 00:25:39.142 )") 00:25:39.142 12:03:32 -- nvmf/common.sh@542 -- # cat 00:25:39.142 12:03:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:39.142 12:03:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:39.142 { 00:25:39.142 "params": { 00:25:39.142 "name": "Nvme$subsystem", 00:25:39.142 "trtype": "$TEST_TRANSPORT", 00:25:39.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.142 "adrfam": "ipv4", 00:25:39.142 "trsvcid": "$NVMF_PORT", 00:25:39.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.142 "hdgst": ${hdgst:-false}, 00:25:39.142 "ddgst": ${ddgst:-false} 00:25:39.142 }, 00:25:39.142 "method": "bdev_nvme_attach_controller" 00:25:39.142 } 00:25:39.142 EOF 00:25:39.142 )") 00:25:39.142 12:03:32 -- nvmf/common.sh@542 -- # cat 00:25:39.142 12:03:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:39.142 12:03:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:39.142 { 00:25:39.142 "params": { 00:25:39.142 "name": "Nvme$subsystem", 00:25:39.142 "trtype": "$TEST_TRANSPORT", 00:25:39.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.142 "adrfam": "ipv4", 00:25:39.142 "trsvcid": "$NVMF_PORT", 00:25:39.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.142 "hdgst": ${hdgst:-false}, 00:25:39.142 "ddgst": ${ddgst:-false} 00:25:39.142 }, 00:25:39.142 "method": "bdev_nvme_attach_controller" 00:25:39.142 } 00:25:39.142 EOF 00:25:39.142 )") 00:25:39.142 12:03:32 -- nvmf/common.sh@542 
-- # cat 00:25:39.142 [2024-06-10 12:03:32.876509] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:39.142 [2024-06-10 12:03:32.876564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2057200 ] 00:25:39.142 12:03:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:39.142 12:03:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:39.142 { 00:25:39.142 "params": { 00:25:39.142 "name": "Nvme$subsystem", 00:25:39.142 "trtype": "$TEST_TRANSPORT", 00:25:39.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.142 "adrfam": "ipv4", 00:25:39.142 "trsvcid": "$NVMF_PORT", 00:25:39.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.142 "hdgst": ${hdgst:-false}, 00:25:39.142 "ddgst": ${ddgst:-false} 00:25:39.142 }, 00:25:39.142 "method": "bdev_nvme_attach_controller" 00:25:39.142 } 00:25:39.142 EOF 00:25:39.142 )") 00:25:39.142 12:03:32 -- nvmf/common.sh@542 -- # cat 00:25:39.142 12:03:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:39.142 12:03:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:39.142 { 00:25:39.142 "params": { 00:25:39.142 "name": "Nvme$subsystem", 00:25:39.142 "trtype": "$TEST_TRANSPORT", 00:25:39.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.142 "adrfam": "ipv4", 00:25:39.142 "trsvcid": "$NVMF_PORT", 00:25:39.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.142 "hdgst": ${hdgst:-false}, 00:25:39.142 "ddgst": ${ddgst:-false} 00:25:39.142 }, 00:25:39.142 "method": "bdev_nvme_attach_controller" 00:25:39.142 } 00:25:39.142 EOF 00:25:39.142 )") 00:25:39.142 12:03:32 -- nvmf/common.sh@542 -- # cat 00:25:39.142 12:03:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:39.142 12:03:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:39.142 { 00:25:39.142 "params": { 00:25:39.142 "name": "Nvme$subsystem", 00:25:39.142 "trtype": "$TEST_TRANSPORT", 00:25:39.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.142 "adrfam": "ipv4", 00:25:39.142 "trsvcid": "$NVMF_PORT", 00:25:39.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.142 "hdgst": ${hdgst:-false}, 00:25:39.142 "ddgst": ${ddgst:-false} 00:25:39.142 }, 00:25:39.142 "method": "bdev_nvme_attach_controller" 00:25:39.142 } 00:25:39.142 EOF 00:25:39.142 )") 00:25:39.142 12:03:32 -- nvmf/common.sh@542 -- # cat 00:25:39.142 12:03:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:39.142 12:03:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:39.142 { 00:25:39.142 "params": { 00:25:39.142 "name": "Nvme$subsystem", 00:25:39.142 "trtype": "$TEST_TRANSPORT", 00:25:39.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.142 "adrfam": "ipv4", 00:25:39.142 "trsvcid": "$NVMF_PORT", 00:25:39.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.142 "hdgst": ${hdgst:-false}, 00:25:39.142 "ddgst": ${ddgst:-false} 00:25:39.142 }, 00:25:39.142 "method": "bdev_nvme_attach_controller" 00:25:39.142 } 00:25:39.142 EOF 00:25:39.142 )") 00:25:39.142 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.142 12:03:32 -- nvmf/common.sh@542 -- # cat 00:25:39.142 12:03:32 -- nvmf/common.sh@544 -- # jq . 
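The trace block above is gen_nvmf_target_json from nvmf/common.sh assembling the bdevperf configuration: one heredoc JSON fragment is appended to a config array per subsystem argument, then the fragments are joined with IFS=',' and pretty-printed through jq (the expanded result is the printf argument shown just below). A rough sketch of that helper, reconstructed from the commands visible in this trace; the outer "subsystems"/"bdev" envelope is not shown in the log and is an assumption based on the standard SPDK JSON config layout:

gen_nvmf_target_json() {
    local subsystem config=()

    for subsystem in "${@:-1}"; do
        # One bdev_nvme_attach_controller entry per requested subsystem; the
        # transport, target address and port come from the test environment,
        # and the digest flags default to false, as in the trace above.
        config+=("$(cat <<-EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done

    # Join the per-subsystem fragments with commas (the IFS=, / printf pair in
    # the trace) and validate/pretty-print with jq. The wrapper object here is
    # assumed; only the joined fragment list is visible in this log.
    local IFS=,
    jq . <<-JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [ $(printf '%s\n' "${config[*]}") ] } ] }
JSON
}

In this run it is called once per test with the subsystem numbers 1 through 10, which is why the printed configuration below contains ten attach-controller entries pointing at cnode1-cnode10 on 10.0.0.2:4420.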
00:25:39.402 12:03:32 -- nvmf/common.sh@545 -- # IFS=, 00:25:39.402 12:03:32 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:39.402 "params": { 00:25:39.402 "name": "Nvme1", 00:25:39.402 "trtype": "tcp", 00:25:39.402 "traddr": "10.0.0.2", 00:25:39.402 "adrfam": "ipv4", 00:25:39.402 "trsvcid": "4420", 00:25:39.402 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:39.402 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:39.402 "hdgst": false, 00:25:39.402 "ddgst": false 00:25:39.402 }, 00:25:39.402 "method": "bdev_nvme_attach_controller" 00:25:39.402 },{ 00:25:39.402 "params": { 00:25:39.402 "name": "Nvme2", 00:25:39.402 "trtype": "tcp", 00:25:39.402 "traddr": "10.0.0.2", 00:25:39.402 "adrfam": "ipv4", 00:25:39.402 "trsvcid": "4420", 00:25:39.402 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:39.402 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:39.402 "hdgst": false, 00:25:39.402 "ddgst": false 00:25:39.402 }, 00:25:39.402 "method": "bdev_nvme_attach_controller" 00:25:39.402 },{ 00:25:39.402 "params": { 00:25:39.402 "name": "Nvme3", 00:25:39.402 "trtype": "tcp", 00:25:39.402 "traddr": "10.0.0.2", 00:25:39.402 "adrfam": "ipv4", 00:25:39.402 "trsvcid": "4420", 00:25:39.402 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:39.402 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:39.402 "hdgst": false, 00:25:39.402 "ddgst": false 00:25:39.402 }, 00:25:39.402 "method": "bdev_nvme_attach_controller" 00:25:39.402 },{ 00:25:39.402 "params": { 00:25:39.402 "name": "Nvme4", 00:25:39.402 "trtype": "tcp", 00:25:39.402 "traddr": "10.0.0.2", 00:25:39.402 "adrfam": "ipv4", 00:25:39.402 "trsvcid": "4420", 00:25:39.402 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:39.402 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:39.402 "hdgst": false, 00:25:39.402 "ddgst": false 00:25:39.402 }, 00:25:39.402 "method": "bdev_nvme_attach_controller" 00:25:39.402 },{ 00:25:39.402 "params": { 00:25:39.402 "name": "Nvme5", 00:25:39.402 "trtype": "tcp", 00:25:39.402 "traddr": "10.0.0.2", 00:25:39.402 "adrfam": "ipv4", 00:25:39.402 "trsvcid": "4420", 00:25:39.402 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:39.402 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:39.402 "hdgst": false, 00:25:39.402 "ddgst": false 00:25:39.402 }, 00:25:39.402 "method": "bdev_nvme_attach_controller" 00:25:39.402 },{ 00:25:39.403 "params": { 00:25:39.403 "name": "Nvme6", 00:25:39.403 "trtype": "tcp", 00:25:39.403 "traddr": "10.0.0.2", 00:25:39.403 "adrfam": "ipv4", 00:25:39.403 "trsvcid": "4420", 00:25:39.403 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:39.403 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:39.403 "hdgst": false, 00:25:39.403 "ddgst": false 00:25:39.403 }, 00:25:39.403 "method": "bdev_nvme_attach_controller" 00:25:39.403 },{ 00:25:39.403 "params": { 00:25:39.403 "name": "Nvme7", 00:25:39.403 "trtype": "tcp", 00:25:39.403 "traddr": "10.0.0.2", 00:25:39.403 "adrfam": "ipv4", 00:25:39.403 "trsvcid": "4420", 00:25:39.403 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:39.403 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:39.403 "hdgst": false, 00:25:39.403 "ddgst": false 00:25:39.403 }, 00:25:39.403 "method": "bdev_nvme_attach_controller" 00:25:39.403 },{ 00:25:39.403 "params": { 00:25:39.403 "name": "Nvme8", 00:25:39.403 "trtype": "tcp", 00:25:39.403 "traddr": "10.0.0.2", 00:25:39.403 "adrfam": "ipv4", 00:25:39.403 "trsvcid": "4420", 00:25:39.403 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:39.403 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:39.403 "hdgst": false, 00:25:39.403 "ddgst": false 00:25:39.403 }, 00:25:39.403 "method": 
"bdev_nvme_attach_controller" 00:25:39.403 },{ 00:25:39.403 "params": { 00:25:39.403 "name": "Nvme9", 00:25:39.403 "trtype": "tcp", 00:25:39.403 "traddr": "10.0.0.2", 00:25:39.403 "adrfam": "ipv4", 00:25:39.403 "trsvcid": "4420", 00:25:39.403 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:39.403 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:39.403 "hdgst": false, 00:25:39.403 "ddgst": false 00:25:39.403 }, 00:25:39.403 "method": "bdev_nvme_attach_controller" 00:25:39.403 },{ 00:25:39.403 "params": { 00:25:39.403 "name": "Nvme10", 00:25:39.403 "trtype": "tcp", 00:25:39.403 "traddr": "10.0.0.2", 00:25:39.403 "adrfam": "ipv4", 00:25:39.403 "trsvcid": "4420", 00:25:39.403 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:39.403 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:39.403 "hdgst": false, 00:25:39.403 "ddgst": false 00:25:39.403 }, 00:25:39.403 "method": "bdev_nvme_attach_controller" 00:25:39.403 }' 00:25:39.403 [2024-06-10 12:03:32.938135] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.403 [2024-06-10 12:03:33.000094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.785 Running I/O for 1 seconds... 00:25:42.168 00:25:42.168 Latency(us) 00:25:42.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.168 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.168 Verification LBA range: start 0x0 length 0x400 00:25:42.168 Nvme1n1 : 1.05 423.61 26.48 0.00 0.00 145545.64 13434.88 133693.44 00:25:42.168 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.168 Verification LBA range: start 0x0 length 0x400 00:25:42.168 Nvme2n1 : 1.06 407.62 25.48 0.00 0.00 151855.02 27962.03 134567.25 00:25:42.168 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.168 Verification LBA range: start 0x0 length 0x400 00:25:42.168 Nvme3n1 : 1.09 445.02 27.81 0.00 0.00 139604.46 15947.09 115343.36 00:25:42.168 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.168 Verification LBA range: start 0x0 length 0x400 00:25:42.168 Nvme4n1 : 1.07 412.35 25.77 0.00 0.00 149647.44 4778.67 142431.57 00:25:42.168 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.168 Verification LBA range: start 0x0 length 0x400 00:25:42.168 Nvme5n1 : 1.13 426.80 26.68 0.00 0.00 138657.60 14417.92 112721.92 00:25:42.168 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.168 Verification LBA range: start 0x0 length 0x400 00:25:42.168 Nvme6n1 : 1.09 444.89 27.81 0.00 0.00 136391.19 9393.49 123207.68 00:25:42.168 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.168 Verification LBA range: start 0x0 length 0x400 00:25:42.168 Nvme7n1 : 1.09 443.17 27.70 0.00 0.00 135973.97 14417.92 117090.99 00:25:42.168 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.168 Verification LBA range: start 0x0 length 0x400 00:25:42.168 Nvme8n1 : 1.13 425.74 26.61 0.00 0.00 135680.66 14417.92 116217.17 00:25:42.168 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.168 Verification LBA range: start 0x0 length 0x400 00:25:42.168 Nvme9n1 : 1.09 444.12 27.76 0.00 0.00 133579.35 16056.32 119712.43 00:25:42.168 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.168 Verification LBA range: start 0x0 length 0x400 00:25:42.168 Nvme10n1 : 1.10 452.53 28.28 0.00 0.00 130365.77 8628.91 
116217.17 00:25:42.168 =================================================================================================================== 00:25:42.168 Total : 4325.85 270.37 0.00 0.00 139443.46 4778.67 142431.57 00:25:42.168 12:03:35 -- target/shutdown.sh@93 -- # stoptarget 00:25:42.168 12:03:35 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:42.168 12:03:35 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:42.168 12:03:35 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:42.168 12:03:35 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:42.168 12:03:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:42.168 12:03:35 -- nvmf/common.sh@116 -- # sync 00:25:42.168 12:03:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:42.168 12:03:35 -- nvmf/common.sh@119 -- # set +e 00:25:42.168 12:03:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:42.168 12:03:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:42.168 rmmod nvme_tcp 00:25:42.168 rmmod nvme_fabrics 00:25:42.168 rmmod nvme_keyring 00:25:42.168 12:03:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:42.168 12:03:35 -- nvmf/common.sh@123 -- # set -e 00:25:42.168 12:03:35 -- nvmf/common.sh@124 -- # return 0 00:25:42.169 12:03:35 -- nvmf/common.sh@477 -- # '[' -n 2056429 ']' 00:25:42.169 12:03:35 -- nvmf/common.sh@478 -- # killprocess 2056429 00:25:42.169 12:03:35 -- common/autotest_common.sh@926 -- # '[' -z 2056429 ']' 00:25:42.169 12:03:35 -- common/autotest_common.sh@930 -- # kill -0 2056429 00:25:42.169 12:03:35 -- common/autotest_common.sh@931 -- # uname 00:25:42.169 12:03:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:42.169 12:03:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2056429 00:25:42.169 12:03:35 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:42.169 12:03:35 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:42.169 12:03:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2056429' 00:25:42.169 killing process with pid 2056429 00:25:42.169 12:03:35 -- common/autotest_common.sh@945 -- # kill 2056429 00:25:42.169 12:03:35 -- common/autotest_common.sh@950 -- # wait 2056429 00:25:42.428 12:03:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:42.428 12:03:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:42.428 12:03:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:42.428 12:03:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:42.428 12:03:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:42.428 12:03:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.428 12:03:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:42.428 12:03:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.971 12:03:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:44.971 00:25:44.971 real 0m16.370s 00:25:44.971 user 0m33.474s 00:25:44.971 sys 0m6.487s 00:25:44.971 12:03:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:44.971 12:03:38 -- common/autotest_common.sh@10 -- # set +x 00:25:44.971 ************************************ 00:25:44.971 END TEST nvmf_shutdown_tc1 00:25:44.971 ************************************ 00:25:44.971 12:03:38 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:25:44.971 12:03:38 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:44.971 12:03:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:44.971 12:03:38 -- common/autotest_common.sh@10 -- # set +x 00:25:44.971 ************************************ 00:25:44.971 START TEST nvmf_shutdown_tc2 00:25:44.971 ************************************ 00:25:44.971 12:03:38 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:25:44.971 12:03:38 -- target/shutdown.sh@98 -- # starttarget 00:25:44.971 12:03:38 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:44.971 12:03:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:44.971 12:03:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:44.971 12:03:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:44.971 12:03:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:44.971 12:03:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:44.971 12:03:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.971 12:03:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:44.971 12:03:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.971 12:03:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:44.971 12:03:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:44.971 12:03:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:44.971 12:03:38 -- common/autotest_common.sh@10 -- # set +x 00:25:44.971 12:03:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:44.971 12:03:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:44.971 12:03:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:44.971 12:03:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:44.971 12:03:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:44.971 12:03:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:44.971 12:03:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:44.971 12:03:38 -- nvmf/common.sh@294 -- # net_devs=() 00:25:44.971 12:03:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:44.971 12:03:38 -- nvmf/common.sh@295 -- # e810=() 00:25:44.971 12:03:38 -- nvmf/common.sh@295 -- # local -ga e810 00:25:44.971 12:03:38 -- nvmf/common.sh@296 -- # x722=() 00:25:44.971 12:03:38 -- nvmf/common.sh@296 -- # local -ga x722 00:25:44.971 12:03:38 -- nvmf/common.sh@297 -- # mlx=() 00:25:44.971 12:03:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:44.971 12:03:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:44.971 12:03:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:44.971 12:03:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:44.971 12:03:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:44.971 12:03:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:44.971 12:03:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:44.972 12:03:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:44.972 12:03:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:44.972 12:03:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:44.972 12:03:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:44.972 12:03:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:44.972 12:03:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:44.972 12:03:38 -- nvmf/common.sh@320 -- # [[ tcp == 
rdma ]] 00:25:44.972 12:03:38 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:44.972 12:03:38 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:44.972 12:03:38 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:44.972 12:03:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:44.972 12:03:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:44.972 12:03:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:44.972 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:44.972 12:03:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:44.972 12:03:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:44.972 12:03:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:44.972 12:03:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:44.972 12:03:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:44.972 12:03:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:44.972 12:03:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:44.972 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:44.972 12:03:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:44.972 12:03:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:44.972 12:03:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:44.972 12:03:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:44.972 12:03:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:44.972 12:03:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:44.972 12:03:38 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:44.972 12:03:38 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:44.972 12:03:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:44.972 12:03:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:44.972 12:03:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:44.972 12:03:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:44.972 12:03:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:44.972 Found net devices under 0000:31:00.0: cvl_0_0 00:25:44.972 12:03:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:44.972 12:03:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:44.972 12:03:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:44.972 12:03:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:44.972 12:03:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:44.972 12:03:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:44.972 Found net devices under 0000:31:00.1: cvl_0_1 00:25:44.972 12:03:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:44.972 12:03:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:44.972 12:03:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:44.972 12:03:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:44.972 12:03:38 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:44.972 12:03:38 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:44.972 12:03:38 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:44.972 12:03:38 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:44.972 12:03:38 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:44.972 12:03:38 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:44.972 12:03:38 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:44.972 12:03:38 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:44.972 12:03:38 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:44.972 12:03:38 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:44.972 12:03:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:44.972 12:03:38 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:44.972 12:03:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:44.972 12:03:38 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:44.972 12:03:38 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:44.972 12:03:38 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:44.972 12:03:38 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:44.972 12:03:38 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:44.972 12:03:38 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:44.972 12:03:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:44.972 12:03:38 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:44.972 12:03:38 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:44.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:44.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:25:44.972 00:25:44.972 --- 10.0.0.2 ping statistics --- 00:25:44.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:44.972 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:25:44.972 12:03:38 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:44.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:44.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.353 ms 00:25:44.972 00:25:44.972 --- 10.0.0.1 ping statistics --- 00:25:44.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:44.972 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:25:44.972 12:03:38 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:44.972 12:03:38 -- nvmf/common.sh@410 -- # return 0 00:25:44.972 12:03:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:44.972 12:03:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:44.972 12:03:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:44.972 12:03:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:44.972 12:03:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:44.972 12:03:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:44.972 12:03:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:44.972 12:03:38 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:44.972 12:03:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:44.972 12:03:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:44.972 12:03:38 -- common/autotest_common.sh@10 -- # set +x 00:25:44.972 12:03:38 -- nvmf/common.sh@469 -- # nvmfpid=2058328 00:25:44.972 12:03:38 -- nvmf/common.sh@470 -- # waitforlisten 2058328 00:25:44.972 12:03:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:44.972 12:03:38 -- common/autotest_common.sh@819 -- # '[' -z 2058328 ']' 00:25:44.972 12:03:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:44.972 12:03:38 -- common/autotest_common.sh@824 -- # local 
max_retries=100 00:25:44.972 12:03:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:44.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:44.972 12:03:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:44.972 12:03:38 -- common/autotest_common.sh@10 -- # set +x 00:25:44.972 [2024-06-10 12:03:38.611862] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:44.972 [2024-06-10 12:03:38.611924] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:44.972 EAL: No free 2048 kB hugepages reported on node 1 00:25:44.972 [2024-06-10 12:03:38.697159] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:45.233 [2024-06-10 12:03:38.757008] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:45.233 [2024-06-10 12:03:38.757106] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:45.233 [2024-06-10 12:03:38.757112] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:45.233 [2024-06-10 12:03:38.757117] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:45.233 [2024-06-10 12:03:38.757250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:45.233 [2024-06-10 12:03:38.757389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:45.233 [2024-06-10 12:03:38.757522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.233 [2024-06-10 12:03:38.757524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:45.803 12:03:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:45.803 12:03:39 -- common/autotest_common.sh@852 -- # return 0 00:25:45.803 12:03:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:45.803 12:03:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:45.803 12:03:39 -- common/autotest_common.sh@10 -- # set +x 00:25:45.803 12:03:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:45.803 12:03:39 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:45.803 12:03:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.803 12:03:39 -- common/autotest_common.sh@10 -- # set +x 00:25:45.803 [2024-06-10 12:03:39.431270] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:45.803 12:03:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.803 12:03:39 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:45.803 12:03:39 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:45.803 12:03:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:45.803 12:03:39 -- common/autotest_common.sh@10 -- # set +x 00:25:45.803 12:03:39 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:45.803 12:03:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:45.803 12:03:39 -- target/shutdown.sh@28 -- # cat 00:25:45.803 12:03:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:45.803 12:03:39 -- target/shutdown.sh@28 -- # cat 
00:25:45.803 12:03:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:45.803 12:03:39 -- target/shutdown.sh@28 -- # cat 00:25:45.803 12:03:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:45.803 12:03:39 -- target/shutdown.sh@28 -- # cat 00:25:45.803 12:03:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:45.803 12:03:39 -- target/shutdown.sh@28 -- # cat 00:25:45.803 12:03:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:45.803 12:03:39 -- target/shutdown.sh@28 -- # cat 00:25:45.803 12:03:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:45.803 12:03:39 -- target/shutdown.sh@28 -- # cat 00:25:45.803 12:03:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:45.803 12:03:39 -- target/shutdown.sh@28 -- # cat 00:25:45.803 12:03:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:45.803 12:03:39 -- target/shutdown.sh@28 -- # cat 00:25:45.803 12:03:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:45.803 12:03:39 -- target/shutdown.sh@28 -- # cat 00:25:45.803 12:03:39 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:45.803 12:03:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.803 12:03:39 -- common/autotest_common.sh@10 -- # set +x 00:25:45.803 Malloc1 00:25:45.803 [2024-06-10 12:03:39.529983] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:45.803 Malloc2 00:25:46.064 Malloc3 00:25:46.064 Malloc4 00:25:46.064 Malloc5 00:25:46.064 Malloc6 00:25:46.064 Malloc7 00:25:46.064 Malloc8 00:25:46.064 Malloc9 00:25:46.325 Malloc10 00:25:46.325 12:03:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:46.325 12:03:39 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:46.325 12:03:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:46.325 12:03:39 -- common/autotest_common.sh@10 -- # set +x 00:25:46.325 12:03:39 -- target/shutdown.sh@102 -- # perfpid=2058710 00:25:46.325 12:03:39 -- target/shutdown.sh@103 -- # waitforlisten 2058710 /var/tmp/bdevperf.sock 00:25:46.325 12:03:39 -- common/autotest_common.sh@819 -- # '[' -z 2058710 ']' 00:25:46.325 12:03:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:46.325 12:03:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:46.325 12:03:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:46.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
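At this point the tc2 target side is ready: the ten Malloc bdevs created above are exported through the subsystems that the generated config connects to (nqn.2016-06.io.spdk:cnode1-cnode10 on the 10.0.0.2:4420 TCP listener), and the test starts bdevperf in the background against them before waiting for its RPC socket. Reconstructed from the command lines traced just below (target/shutdown.sh@101-@103); the process substitution is an assumption consistent with the --json /dev/fd/63 argument and the separately traced gen_nvmf_target_json call:

# Background bdevperf run: queue depth 64, 64 KiB I/O, verify workload, 10 s.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!                                      # 2058710 in this run
waitforlisten "$perfpid" /var/tmp/bdevperf.sock # block until its RPC socket is up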
00:25:46.325 12:03:39 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:46.325 12:03:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:46.325 12:03:39 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:46.325 12:03:39 -- common/autotest_common.sh@10 -- # set +x 00:25:46.325 12:03:39 -- nvmf/common.sh@520 -- # config=() 00:25:46.325 12:03:39 -- nvmf/common.sh@520 -- # local subsystem config 00:25:46.325 12:03:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:46.325 12:03:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:46.325 { 00:25:46.325 "params": { 00:25:46.325 "name": "Nvme$subsystem", 00:25:46.325 "trtype": "$TEST_TRANSPORT", 00:25:46.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.325 "adrfam": "ipv4", 00:25:46.325 "trsvcid": "$NVMF_PORT", 00:25:46.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.325 "hdgst": ${hdgst:-false}, 00:25:46.325 "ddgst": ${ddgst:-false} 00:25:46.325 }, 00:25:46.325 "method": "bdev_nvme_attach_controller" 00:25:46.325 } 00:25:46.325 EOF 00:25:46.325 )") 00:25:46.325 12:03:39 -- nvmf/common.sh@542 -- # cat 00:25:46.325 12:03:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:46.325 12:03:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:46.325 { 00:25:46.325 "params": { 00:25:46.325 "name": "Nvme$subsystem", 00:25:46.325 "trtype": "$TEST_TRANSPORT", 00:25:46.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.325 "adrfam": "ipv4", 00:25:46.325 "trsvcid": "$NVMF_PORT", 00:25:46.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.325 "hdgst": ${hdgst:-false}, 00:25:46.325 "ddgst": ${ddgst:-false} 00:25:46.325 }, 00:25:46.325 "method": "bdev_nvme_attach_controller" 00:25:46.325 } 00:25:46.325 EOF 00:25:46.325 )") 00:25:46.325 12:03:39 -- nvmf/common.sh@542 -- # cat 00:25:46.325 12:03:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:46.325 12:03:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:46.325 { 00:25:46.325 "params": { 00:25:46.325 "name": "Nvme$subsystem", 00:25:46.325 "trtype": "$TEST_TRANSPORT", 00:25:46.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.325 "adrfam": "ipv4", 00:25:46.325 "trsvcid": "$NVMF_PORT", 00:25:46.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.325 "hdgst": ${hdgst:-false}, 00:25:46.325 "ddgst": ${ddgst:-false} 00:25:46.325 }, 00:25:46.325 "method": "bdev_nvme_attach_controller" 00:25:46.325 } 00:25:46.325 EOF 00:25:46.325 )") 00:25:46.325 12:03:39 -- nvmf/common.sh@542 -- # cat 00:25:46.325 12:03:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:46.325 12:03:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:46.325 { 00:25:46.325 "params": { 00:25:46.325 "name": "Nvme$subsystem", 00:25:46.325 "trtype": "$TEST_TRANSPORT", 00:25:46.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.325 "adrfam": "ipv4", 00:25:46.325 "trsvcid": "$NVMF_PORT", 00:25:46.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.325 "hdgst": ${hdgst:-false}, 00:25:46.325 "ddgst": ${ddgst:-false} 00:25:46.325 }, 00:25:46.325 "method": "bdev_nvme_attach_controller" 00:25:46.325 } 00:25:46.325 EOF 00:25:46.325 )") 
00:25:46.325 12:03:39 -- nvmf/common.sh@542 -- # cat 00:25:46.325 12:03:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:46.325 12:03:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:46.325 { 00:25:46.325 "params": { 00:25:46.325 "name": "Nvme$subsystem", 00:25:46.325 "trtype": "$TEST_TRANSPORT", 00:25:46.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.325 "adrfam": "ipv4", 00:25:46.325 "trsvcid": "$NVMF_PORT", 00:25:46.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.325 "hdgst": ${hdgst:-false}, 00:25:46.325 "ddgst": ${ddgst:-false} 00:25:46.325 }, 00:25:46.325 "method": "bdev_nvme_attach_controller" 00:25:46.325 } 00:25:46.325 EOF 00:25:46.325 )") 00:25:46.325 12:03:39 -- nvmf/common.sh@542 -- # cat 00:25:46.325 12:03:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:46.325 12:03:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:46.325 { 00:25:46.325 "params": { 00:25:46.325 "name": "Nvme$subsystem", 00:25:46.325 "trtype": "$TEST_TRANSPORT", 00:25:46.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.325 "adrfam": "ipv4", 00:25:46.325 "trsvcid": "$NVMF_PORT", 00:25:46.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.325 "hdgst": ${hdgst:-false}, 00:25:46.325 "ddgst": ${ddgst:-false} 00:25:46.325 }, 00:25:46.325 "method": "bdev_nvme_attach_controller" 00:25:46.325 } 00:25:46.325 EOF 00:25:46.325 )") 00:25:46.325 12:03:39 -- nvmf/common.sh@542 -- # cat 00:25:46.325 [2024-06-10 12:03:39.978958] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:46.325 [2024-06-10 12:03:39.979010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2058710 ] 00:25:46.325 12:03:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:46.325 12:03:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:46.325 { 00:25:46.325 "params": { 00:25:46.325 "name": "Nvme$subsystem", 00:25:46.325 "trtype": "$TEST_TRANSPORT", 00:25:46.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.325 "adrfam": "ipv4", 00:25:46.325 "trsvcid": "$NVMF_PORT", 00:25:46.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.325 "hdgst": ${hdgst:-false}, 00:25:46.325 "ddgst": ${ddgst:-false} 00:25:46.325 }, 00:25:46.325 "method": "bdev_nvme_attach_controller" 00:25:46.325 } 00:25:46.325 EOF 00:25:46.325 )") 00:25:46.325 12:03:39 -- nvmf/common.sh@542 -- # cat 00:25:46.325 12:03:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:46.325 12:03:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:46.325 { 00:25:46.326 "params": { 00:25:46.326 "name": "Nvme$subsystem", 00:25:46.326 "trtype": "$TEST_TRANSPORT", 00:25:46.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.326 "adrfam": "ipv4", 00:25:46.326 "trsvcid": "$NVMF_PORT", 00:25:46.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.326 "hdgst": ${hdgst:-false}, 00:25:46.326 "ddgst": ${ddgst:-false} 00:25:46.326 }, 00:25:46.326 "method": "bdev_nvme_attach_controller" 00:25:46.326 } 00:25:46.326 EOF 00:25:46.326 )") 00:25:46.326 12:03:39 -- nvmf/common.sh@542 -- # cat 00:25:46.326 12:03:39 -- nvmf/common.sh@522 -- # for subsystem in 
"${@:-1}" 00:25:46.326 12:03:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:46.326 { 00:25:46.326 "params": { 00:25:46.326 "name": "Nvme$subsystem", 00:25:46.326 "trtype": "$TEST_TRANSPORT", 00:25:46.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.326 "adrfam": "ipv4", 00:25:46.326 "trsvcid": "$NVMF_PORT", 00:25:46.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.326 "hdgst": ${hdgst:-false}, 00:25:46.326 "ddgst": ${ddgst:-false} 00:25:46.326 }, 00:25:46.326 "method": "bdev_nvme_attach_controller" 00:25:46.326 } 00:25:46.326 EOF 00:25:46.326 )") 00:25:46.326 12:03:39 -- nvmf/common.sh@542 -- # cat 00:25:46.326 12:03:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:46.326 12:03:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:46.326 { 00:25:46.326 "params": { 00:25:46.326 "name": "Nvme$subsystem", 00:25:46.326 "trtype": "$TEST_TRANSPORT", 00:25:46.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.326 "adrfam": "ipv4", 00:25:46.326 "trsvcid": "$NVMF_PORT", 00:25:46.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.326 "hdgst": ${hdgst:-false}, 00:25:46.326 "ddgst": ${ddgst:-false} 00:25:46.326 }, 00:25:46.326 "method": "bdev_nvme_attach_controller" 00:25:46.326 } 00:25:46.326 EOF 00:25:46.326 )") 00:25:46.326 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.326 12:03:40 -- nvmf/common.sh@542 -- # cat 00:25:46.326 12:03:40 -- nvmf/common.sh@544 -- # jq . 00:25:46.326 12:03:40 -- nvmf/common.sh@545 -- # IFS=, 00:25:46.326 12:03:40 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:46.326 "params": { 00:25:46.326 "name": "Nvme1", 00:25:46.326 "trtype": "tcp", 00:25:46.326 "traddr": "10.0.0.2", 00:25:46.326 "adrfam": "ipv4", 00:25:46.326 "trsvcid": "4420", 00:25:46.326 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:46.326 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:46.326 "hdgst": false, 00:25:46.326 "ddgst": false 00:25:46.326 }, 00:25:46.326 "method": "bdev_nvme_attach_controller" 00:25:46.326 },{ 00:25:46.326 "params": { 00:25:46.326 "name": "Nvme2", 00:25:46.326 "trtype": "tcp", 00:25:46.326 "traddr": "10.0.0.2", 00:25:46.326 "adrfam": "ipv4", 00:25:46.326 "trsvcid": "4420", 00:25:46.326 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:46.326 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:46.326 "hdgst": false, 00:25:46.326 "ddgst": false 00:25:46.326 }, 00:25:46.326 "method": "bdev_nvme_attach_controller" 00:25:46.326 },{ 00:25:46.326 "params": { 00:25:46.326 "name": "Nvme3", 00:25:46.326 "trtype": "tcp", 00:25:46.326 "traddr": "10.0.0.2", 00:25:46.326 "adrfam": "ipv4", 00:25:46.326 "trsvcid": "4420", 00:25:46.326 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:46.326 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:46.326 "hdgst": false, 00:25:46.326 "ddgst": false 00:25:46.326 }, 00:25:46.326 "method": "bdev_nvme_attach_controller" 00:25:46.326 },{ 00:25:46.326 "params": { 00:25:46.326 "name": "Nvme4", 00:25:46.326 "trtype": "tcp", 00:25:46.326 "traddr": "10.0.0.2", 00:25:46.326 "adrfam": "ipv4", 00:25:46.326 "trsvcid": "4420", 00:25:46.326 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:46.326 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:46.326 "hdgst": false, 00:25:46.326 "ddgst": false 00:25:46.326 }, 00:25:46.326 "method": "bdev_nvme_attach_controller" 00:25:46.326 },{ 00:25:46.326 "params": { 00:25:46.326 "name": "Nvme5", 00:25:46.326 "trtype": "tcp", 00:25:46.326 "traddr": "10.0.0.2", 00:25:46.326 
"adrfam": "ipv4", 00:25:46.326 "trsvcid": "4420", 00:25:46.326 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:46.326 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:46.326 "hdgst": false, 00:25:46.326 "ddgst": false 00:25:46.326 }, 00:25:46.326 "method": "bdev_nvme_attach_controller" 00:25:46.326 },{ 00:25:46.326 "params": { 00:25:46.326 "name": "Nvme6", 00:25:46.326 "trtype": "tcp", 00:25:46.326 "traddr": "10.0.0.2", 00:25:46.326 "adrfam": "ipv4", 00:25:46.326 "trsvcid": "4420", 00:25:46.326 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:46.326 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:46.326 "hdgst": false, 00:25:46.326 "ddgst": false 00:25:46.326 }, 00:25:46.326 "method": "bdev_nvme_attach_controller" 00:25:46.326 },{ 00:25:46.326 "params": { 00:25:46.326 "name": "Nvme7", 00:25:46.326 "trtype": "tcp", 00:25:46.326 "traddr": "10.0.0.2", 00:25:46.326 "adrfam": "ipv4", 00:25:46.326 "trsvcid": "4420", 00:25:46.326 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:46.326 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:46.326 "hdgst": false, 00:25:46.326 "ddgst": false 00:25:46.326 }, 00:25:46.326 "method": "bdev_nvme_attach_controller" 00:25:46.326 },{ 00:25:46.326 "params": { 00:25:46.326 "name": "Nvme8", 00:25:46.326 "trtype": "tcp", 00:25:46.326 "traddr": "10.0.0.2", 00:25:46.326 "adrfam": "ipv4", 00:25:46.326 "trsvcid": "4420", 00:25:46.326 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:46.326 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:46.326 "hdgst": false, 00:25:46.326 "ddgst": false 00:25:46.326 }, 00:25:46.326 "method": "bdev_nvme_attach_controller" 00:25:46.326 },{ 00:25:46.326 "params": { 00:25:46.326 "name": "Nvme9", 00:25:46.326 "trtype": "tcp", 00:25:46.326 "traddr": "10.0.0.2", 00:25:46.326 "adrfam": "ipv4", 00:25:46.326 "trsvcid": "4420", 00:25:46.326 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:46.326 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:46.326 "hdgst": false, 00:25:46.326 "ddgst": false 00:25:46.326 }, 00:25:46.326 "method": "bdev_nvme_attach_controller" 00:25:46.326 },{ 00:25:46.326 "params": { 00:25:46.326 "name": "Nvme10", 00:25:46.326 "trtype": "tcp", 00:25:46.326 "traddr": "10.0.0.2", 00:25:46.326 "adrfam": "ipv4", 00:25:46.326 "trsvcid": "4420", 00:25:46.326 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:46.326 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:46.326 "hdgst": false, 00:25:46.326 "ddgst": false 00:25:46.326 }, 00:25:46.326 "method": "bdev_nvme_attach_controller" 00:25:46.326 }' 00:25:46.326 [2024-06-10 12:03:40.045093] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.587 [2024-06-10 12:03:40.110083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.969 Running I/O for 10 seconds... 
00:25:48.540 12:03:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:48.540 12:03:42 -- common/autotest_common.sh@852 -- # return 0 00:25:48.540 12:03:42 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:48.540 12:03:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.540 12:03:42 -- common/autotest_common.sh@10 -- # set +x 00:25:48.540 12:03:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.540 12:03:42 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:48.540 12:03:42 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:48.540 12:03:42 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:48.540 12:03:42 -- target/shutdown.sh@57 -- # local ret=1 00:25:48.540 12:03:42 -- target/shutdown.sh@58 -- # local i 00:25:48.540 12:03:42 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:48.540 12:03:42 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:48.540 12:03:42 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:48.540 12:03:42 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:48.540 12:03:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.540 12:03:42 -- common/autotest_common.sh@10 -- # set +x 00:25:48.540 12:03:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.540 12:03:42 -- target/shutdown.sh@60 -- # read_io_count=254 00:25:48.540 12:03:42 -- target/shutdown.sh@63 -- # '[' 254 -ge 100 ']' 00:25:48.540 12:03:42 -- target/shutdown.sh@64 -- # ret=0 00:25:48.540 12:03:42 -- target/shutdown.sh@65 -- # break 00:25:48.540 12:03:42 -- target/shutdown.sh@69 -- # return 0 00:25:48.540 12:03:42 -- target/shutdown.sh@109 -- # killprocess 2058710 00:25:48.540 12:03:42 -- common/autotest_common.sh@926 -- # '[' -z 2058710 ']' 00:25:48.540 12:03:42 -- common/autotest_common.sh@930 -- # kill -0 2058710 00:25:48.540 12:03:42 -- common/autotest_common.sh@931 -- # uname 00:25:48.540 12:03:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:48.540 12:03:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2058710 00:25:48.540 12:03:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:48.540 12:03:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:48.540 12:03:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2058710' 00:25:48.540 killing process with pid 2058710 00:25:48.540 12:03:42 -- common/autotest_common.sh@945 -- # kill 2058710 00:25:48.540 12:03:42 -- common/autotest_common.sh@950 -- # wait 2058710 00:25:48.540 Received shutdown signal, test time was about 0.854380 seconds 00:25:48.540 00:25:48.540 Latency(us) 00:25:48.540 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.540 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:48.540 Verification LBA range: start 0x0 length 0x400 00:25:48.540 Nvme1n1 : 0.80 442.27 27.64 0.00 0.00 141334.23 19333.12 134567.25 00:25:48.540 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:48.540 Verification LBA range: start 0x0 length 0x400 00:25:48.540 Nvme2n1 : 0.80 393.64 24.60 0.00 0.00 157087.71 22173.01 164276.91 00:25:48.540 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:48.540 Verification LBA range: start 0x0 length 0x400 00:25:48.540 Nvme3n1 : 0.80 441.61 27.60 0.00 0.00 138595.26 18022.40 132819.63 00:25:48.541 Job: Nvme4n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:25:48.541 Verification LBA range: start 0x0 length 0x400 00:25:48.541 Nvme4n1 : 0.82 431.42 26.96 0.00 0.00 142058.54 6662.83 126702.93 00:25:48.541 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:48.541 Verification LBA range: start 0x0 length 0x400 00:25:48.541 Nvme5n1 : 0.82 441.47 27.59 0.00 0.00 136246.31 17694.72 118838.61 00:25:48.541 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:48.541 Verification LBA range: start 0x0 length 0x400 00:25:48.541 Nvme6n1 : 0.81 438.26 27.39 0.00 0.00 135435.99 20643.84 124081.49 00:25:48.541 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:48.541 Verification LBA range: start 0x0 length 0x400 00:25:48.541 Nvme7n1 : 0.81 440.84 27.55 0.00 0.00 133298.53 19114.67 116217.17 00:25:48.541 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:48.541 Verification LBA range: start 0x0 length 0x400 00:25:48.541 Nvme8n1 : 0.80 444.61 27.79 0.00 0.00 130975.23 17257.81 137188.69 00:25:48.541 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:48.541 Verification LBA range: start 0x0 length 0x400 00:25:48.541 Nvme9n1 : 0.81 442.44 27.65 0.00 0.00 129982.64 11741.87 106605.23 00:25:48.541 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:48.541 Verification LBA range: start 0x0 length 0x400 00:25:48.541 Nvme10n1 : 0.85 415.93 26.00 0.00 0.00 130746.63 17367.04 109226.67 00:25:48.541 =================================================================================================================== 00:25:48.541 Total : 4332.48 270.78 0.00 0.00 137346.65 6662.83 164276.91 00:25:48.801 12:03:42 -- target/shutdown.sh@112 -- # sleep 1 00:25:49.743 12:03:43 -- target/shutdown.sh@113 -- # kill -0 2058328 00:25:49.743 12:03:43 -- target/shutdown.sh@115 -- # stoptarget 00:25:49.743 12:03:43 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:49.743 12:03:43 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:49.743 12:03:43 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:49.743 12:03:43 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:49.743 12:03:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:49.743 12:03:43 -- nvmf/common.sh@116 -- # sync 00:25:49.743 12:03:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:49.743 12:03:43 -- nvmf/common.sh@119 -- # set +e 00:25:49.743 12:03:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:49.743 12:03:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:49.743 rmmod nvme_tcp 00:25:49.743 rmmod nvme_fabrics 00:25:49.743 rmmod nvme_keyring 00:25:50.003 12:03:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:50.003 12:03:43 -- nvmf/common.sh@123 -- # set -e 00:25:50.003 12:03:43 -- nvmf/common.sh@124 -- # return 0 00:25:50.003 12:03:43 -- nvmf/common.sh@477 -- # '[' -n 2058328 ']' 00:25:50.003 12:03:43 -- nvmf/common.sh@478 -- # killprocess 2058328 00:25:50.003 12:03:43 -- common/autotest_common.sh@926 -- # '[' -z 2058328 ']' 00:25:50.003 12:03:43 -- common/autotest_common.sh@930 -- # kill -0 2058328 00:25:50.003 12:03:43 -- common/autotest_common.sh@931 -- # uname 00:25:50.003 12:03:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:50.003 12:03:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o 
comm= 2058328 00:25:50.003 12:03:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:50.003 12:03:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:50.003 12:03:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2058328' 00:25:50.003 killing process with pid 2058328 00:25:50.003 12:03:43 -- common/autotest_common.sh@945 -- # kill 2058328 00:25:50.003 12:03:43 -- common/autotest_common.sh@950 -- # wait 2058328 00:25:50.263 12:03:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:50.264 12:03:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:50.264 12:03:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:50.264 12:03:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:50.264 12:03:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:50.264 12:03:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.264 12:03:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:50.264 12:03:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.225 12:03:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:52.225 00:25:52.225 real 0m7.693s 00:25:52.225 user 0m22.882s 00:25:52.225 sys 0m1.247s 00:25:52.225 12:03:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:52.225 12:03:45 -- common/autotest_common.sh@10 -- # set +x 00:25:52.225 ************************************ 00:25:52.225 END TEST nvmf_shutdown_tc2 00:25:52.225 ************************************ 00:25:52.225 12:03:45 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:52.225 12:03:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:52.225 12:03:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:52.225 12:03:45 -- common/autotest_common.sh@10 -- # set +x 00:25:52.225 ************************************ 00:25:52.225 START TEST nvmf_shutdown_tc3 00:25:52.225 ************************************ 00:25:52.225 12:03:45 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:25:52.225 12:03:45 -- target/shutdown.sh@120 -- # starttarget 00:25:52.225 12:03:45 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:52.225 12:03:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:52.225 12:03:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:52.225 12:03:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:52.225 12:03:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:52.225 12:03:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:52.225 12:03:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.225 12:03:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:52.225 12:03:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.225 12:03:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:52.225 12:03:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:52.225 12:03:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:52.225 12:03:45 -- common/autotest_common.sh@10 -- # set +x 00:25:52.225 12:03:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:52.225 12:03:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:52.225 12:03:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:52.225 12:03:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:52.225 12:03:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:52.225 12:03:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:52.225 12:03:45 -- nvmf/common.sh@292 
-- # local -A pci_drivers 00:25:52.225 12:03:45 -- nvmf/common.sh@294 -- # net_devs=() 00:25:52.225 12:03:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:52.225 12:03:45 -- nvmf/common.sh@295 -- # e810=() 00:25:52.225 12:03:45 -- nvmf/common.sh@295 -- # local -ga e810 00:25:52.225 12:03:45 -- nvmf/common.sh@296 -- # x722=() 00:25:52.225 12:03:45 -- nvmf/common.sh@296 -- # local -ga x722 00:25:52.225 12:03:45 -- nvmf/common.sh@297 -- # mlx=() 00:25:52.225 12:03:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:52.225 12:03:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:52.225 12:03:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:52.225 12:03:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:52.225 12:03:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:52.225 12:03:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:52.225 12:03:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:52.225 12:03:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:52.225 12:03:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:52.225 12:03:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:52.225 12:03:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:52.225 12:03:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:52.225 12:03:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:52.225 12:03:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:52.225 12:03:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:52.225 12:03:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:52.225 12:03:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:52.225 12:03:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:52.225 12:03:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:52.225 12:03:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:52.225 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:52.225 12:03:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:52.225 12:03:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:52.225 12:03:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.225 12:03:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.225 12:03:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:52.225 12:03:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:52.225 12:03:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:52.225 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:52.225 12:03:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:52.225 12:03:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:52.226 12:03:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.226 12:03:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.226 12:03:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:52.226 12:03:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:52.226 12:03:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:52.226 12:03:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:52.226 12:03:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:52.226 12:03:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.226 12:03:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
00:25:52.226 12:03:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.226 12:03:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:52.226 Found net devices under 0000:31:00.0: cvl_0_0 00:25:52.226 12:03:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.226 12:03:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:52.226 12:03:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.226 12:03:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:52.226 12:03:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.226 12:03:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:52.226 Found net devices under 0000:31:00.1: cvl_0_1 00:25:52.226 12:03:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.226 12:03:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:52.226 12:03:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:52.226 12:03:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:52.226 12:03:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:52.226 12:03:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:52.226 12:03:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:52.226 12:03:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:52.226 12:03:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:52.226 12:03:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:52.226 12:03:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:52.226 12:03:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:52.226 12:03:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:52.226 12:03:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:52.226 12:03:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:52.226 12:03:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:52.226 12:03:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:52.226 12:03:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:52.226 12:03:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:52.487 12:03:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:52.487 12:03:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:52.487 12:03:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:52.487 12:03:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:52.487 12:03:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:52.487 12:03:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:52.748 12:03:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:52.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:52.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:25:52.748 00:25:52.748 --- 10.0.0.2 ping statistics --- 00:25:52.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.748 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:25:52.748 12:03:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:52.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:52.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.376 ms 00:25:52.748 00:25:52.748 --- 10.0.0.1 ping statistics --- 00:25:52.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.748 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:25:52.748 12:03:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:52.748 12:03:46 -- nvmf/common.sh@410 -- # return 0 00:25:52.748 12:03:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:52.748 12:03:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:52.748 12:03:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:52.748 12:03:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:52.748 12:03:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:52.748 12:03:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:52.748 12:03:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:52.748 12:03:46 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:52.748 12:03:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:52.748 12:03:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:52.748 12:03:46 -- common/autotest_common.sh@10 -- # set +x 00:25:52.748 12:03:46 -- nvmf/common.sh@469 -- # nvmfpid=2060086 00:25:52.748 12:03:46 -- nvmf/common.sh@470 -- # waitforlisten 2060086 00:25:52.748 12:03:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:52.748 12:03:46 -- common/autotest_common.sh@819 -- # '[' -z 2060086 ']' 00:25:52.748 12:03:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.748 12:03:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:52.748 12:03:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:52.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:52.748 12:03:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:52.748 12:03:46 -- common/autotest_common.sh@10 -- # set +x 00:25:52.748 [2024-06-10 12:03:46.387002] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:52.748 [2024-06-10 12:03:46.387067] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:52.748 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.748 [2024-06-10 12:03:46.476007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:53.009 [2024-06-10 12:03:46.548531] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:53.009 [2024-06-10 12:03:46.548638] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:53.009 [2024-06-10 12:03:46.548645] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:53.009 [2024-06-10 12:03:46.548651] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
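The nvmf_tcp_init steps traced above wire the two E810 ports back to back through a network namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk as the target-side port at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, connectivity is verified with a ping in each direction, and the target application is then launched inside that namespace. A minimal sketch of the same setup, with interface names, addresses and the nvmf_tgt arguments taken from the trace (paths shortened; this assumes the two ports are physically looped as on this test node):

# Sketch: loopback topology used by nvmf_tcp_init, target side isolated in a netns.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                                # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0        # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                         # target -> initiator
# start the target inside the namespace, command line as in the trace (path shortened)
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &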
00:25:53.009 [2024-06-10 12:03:46.548760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:53.009 [2024-06-10 12:03:46.548912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:53.009 [2024-06-10 12:03:46.549033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.009 [2024-06-10 12:03:46.549035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:53.580 12:03:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:53.580 12:03:47 -- common/autotest_common.sh@852 -- # return 0 00:25:53.580 12:03:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:53.580 12:03:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:53.580 12:03:47 -- common/autotest_common.sh@10 -- # set +x 00:25:53.580 12:03:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:53.580 12:03:47 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:53.580 12:03:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:53.580 12:03:47 -- common/autotest_common.sh@10 -- # set +x 00:25:53.580 [2024-06-10 12:03:47.197187] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:53.580 12:03:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:53.580 12:03:47 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:53.580 12:03:47 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:53.580 12:03:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:53.580 12:03:47 -- common/autotest_common.sh@10 -- # set +x 00:25:53.580 12:03:47 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:53.580 12:03:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:53.580 12:03:47 -- target/shutdown.sh@28 -- # cat 00:25:53.580 12:03:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:53.580 12:03:47 -- target/shutdown.sh@28 -- # cat 00:25:53.580 12:03:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:53.580 12:03:47 -- target/shutdown.sh@28 -- # cat 00:25:53.580 12:03:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:53.580 12:03:47 -- target/shutdown.sh@28 -- # cat 00:25:53.580 12:03:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:53.580 12:03:47 -- target/shutdown.sh@28 -- # cat 00:25:53.580 12:03:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:53.580 12:03:47 -- target/shutdown.sh@28 -- # cat 00:25:53.580 12:03:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:53.580 12:03:47 -- target/shutdown.sh@28 -- # cat 00:25:53.580 12:03:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:53.580 12:03:47 -- target/shutdown.sh@28 -- # cat 00:25:53.580 12:03:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:53.580 12:03:47 -- target/shutdown.sh@28 -- # cat 00:25:53.580 12:03:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:53.580 12:03:47 -- target/shutdown.sh@28 -- # cat 00:25:53.580 12:03:47 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:53.580 12:03:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:53.580 12:03:47 -- common/autotest_common.sh@10 -- # set +x 00:25:53.580 Malloc1 00:25:53.580 [2024-06-10 12:03:47.296037] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:53.580 Malloc2 
00:25:53.840 Malloc3 00:25:53.840 Malloc4 00:25:53.840 Malloc5 00:25:53.840 Malloc6 00:25:53.840 Malloc7 00:25:53.840 Malloc8 00:25:53.840 Malloc9 00:25:54.100 Malloc10 00:25:54.100 12:03:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.100 12:03:47 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:54.100 12:03:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:54.100 12:03:47 -- common/autotest_common.sh@10 -- # set +x 00:25:54.100 12:03:47 -- target/shutdown.sh@124 -- # perfpid=2060293 00:25:54.100 12:03:47 -- target/shutdown.sh@125 -- # waitforlisten 2060293 /var/tmp/bdevperf.sock 00:25:54.100 12:03:47 -- common/autotest_common.sh@819 -- # '[' -z 2060293 ']' 00:25:54.100 12:03:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:54.100 12:03:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:54.100 12:03:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:54.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:54.100 12:03:47 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:54.100 12:03:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:54.100 12:03:47 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:54.100 12:03:47 -- common/autotest_common.sh@10 -- # set +x 00:25:54.100 12:03:47 -- nvmf/common.sh@520 -- # config=() 00:25:54.100 12:03:47 -- nvmf/common.sh@520 -- # local subsystem config 00:25:54.100 12:03:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:54.100 12:03:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:54.100 { 00:25:54.100 "params": { 00:25:54.100 "name": "Nvme$subsystem", 00:25:54.100 "trtype": "$TEST_TRANSPORT", 00:25:54.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:54.100 "adrfam": "ipv4", 00:25:54.100 "trsvcid": "$NVMF_PORT", 00:25:54.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:54.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:54.100 "hdgst": ${hdgst:-false}, 00:25:54.100 "ddgst": ${ddgst:-false} 00:25:54.100 }, 00:25:54.100 "method": "bdev_nvme_attach_controller" 00:25:54.100 } 00:25:54.100 EOF 00:25:54.100 )") 00:25:54.100 12:03:47 -- nvmf/common.sh@542 -- # cat 00:25:54.100 12:03:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:54.100 12:03:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:54.100 { 00:25:54.100 "params": { 00:25:54.100 "name": "Nvme$subsystem", 00:25:54.100 "trtype": "$TEST_TRANSPORT", 00:25:54.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:54.100 "adrfam": "ipv4", 00:25:54.100 "trsvcid": "$NVMF_PORT", 00:25:54.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:54.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:54.100 "hdgst": ${hdgst:-false}, 00:25:54.100 "ddgst": ${ddgst:-false} 00:25:54.100 }, 00:25:54.100 "method": "bdev_nvme_attach_controller" 00:25:54.100 } 00:25:54.101 EOF 00:25:54.101 )") 00:25:54.101 12:03:47 -- nvmf/common.sh@542 -- # cat 00:25:54.101 12:03:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:54.101 12:03:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:54.101 { 00:25:54.101 "params": { 00:25:54.101 "name": "Nvme$subsystem", 00:25:54.101 "trtype": "$TEST_TRANSPORT", 00:25:54.101 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:25:54.101 "adrfam": "ipv4", 00:25:54.101 "trsvcid": "$NVMF_PORT", 00:25:54.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:54.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:54.101 "hdgst": ${hdgst:-false}, 00:25:54.101 "ddgst": ${ddgst:-false} 00:25:54.101 }, 00:25:54.101 "method": "bdev_nvme_attach_controller" 00:25:54.101 } 00:25:54.101 EOF 00:25:54.101 )") 00:25:54.101 12:03:47 -- nvmf/common.sh@542 -- # cat 00:25:54.101 12:03:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:54.101 12:03:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:54.101 { 00:25:54.101 "params": { 00:25:54.101 "name": "Nvme$subsystem", 00:25:54.101 "trtype": "$TEST_TRANSPORT", 00:25:54.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:54.101 "adrfam": "ipv4", 00:25:54.101 "trsvcid": "$NVMF_PORT", 00:25:54.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:54.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:54.101 "hdgst": ${hdgst:-false}, 00:25:54.101 "ddgst": ${ddgst:-false} 00:25:54.101 }, 00:25:54.101 "method": "bdev_nvme_attach_controller" 00:25:54.101 } 00:25:54.101 EOF 00:25:54.101 )") 00:25:54.101 12:03:47 -- nvmf/common.sh@542 -- # cat 00:25:54.101 12:03:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:54.101 12:03:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:54.101 { 00:25:54.101 "params": { 00:25:54.101 "name": "Nvme$subsystem", 00:25:54.101 "trtype": "$TEST_TRANSPORT", 00:25:54.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:54.101 "adrfam": "ipv4", 00:25:54.101 "trsvcid": "$NVMF_PORT", 00:25:54.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:54.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:54.101 "hdgst": ${hdgst:-false}, 00:25:54.101 "ddgst": ${ddgst:-false} 00:25:54.101 }, 00:25:54.101 "method": "bdev_nvme_attach_controller" 00:25:54.101 } 00:25:54.101 EOF 00:25:54.101 )") 00:25:54.101 12:03:47 -- nvmf/common.sh@542 -- # cat 00:25:54.101 12:03:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:54.101 12:03:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:54.101 { 00:25:54.101 "params": { 00:25:54.101 "name": "Nvme$subsystem", 00:25:54.101 "trtype": "$TEST_TRANSPORT", 00:25:54.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:54.101 "adrfam": "ipv4", 00:25:54.101 "trsvcid": "$NVMF_PORT", 00:25:54.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:54.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:54.101 "hdgst": ${hdgst:-false}, 00:25:54.101 "ddgst": ${ddgst:-false} 00:25:54.101 }, 00:25:54.101 "method": "bdev_nvme_attach_controller" 00:25:54.101 } 00:25:54.101 EOF 00:25:54.101 )") 00:25:54.101 12:03:47 -- nvmf/common.sh@542 -- # cat 00:25:54.101 [2024-06-10 12:03:47.733950] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:54.101 [2024-06-10 12:03:47.734006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2060293 ] 00:25:54.101 12:03:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:54.101 12:03:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:54.101 { 00:25:54.101 "params": { 00:25:54.101 "name": "Nvme$subsystem", 00:25:54.101 "trtype": "$TEST_TRANSPORT", 00:25:54.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:54.101 "adrfam": "ipv4", 00:25:54.101 "trsvcid": "$NVMF_PORT", 00:25:54.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:54.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:54.101 "hdgst": ${hdgst:-false}, 00:25:54.101 "ddgst": ${ddgst:-false} 00:25:54.101 }, 00:25:54.101 "method": "bdev_nvme_attach_controller" 00:25:54.101 } 00:25:54.101 EOF 00:25:54.101 )") 00:25:54.101 12:03:47 -- nvmf/common.sh@542 -- # cat 00:25:54.101 12:03:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:54.101 12:03:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:54.101 { 00:25:54.101 "params": { 00:25:54.101 "name": "Nvme$subsystem", 00:25:54.101 "trtype": "$TEST_TRANSPORT", 00:25:54.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:54.101 "adrfam": "ipv4", 00:25:54.101 "trsvcid": "$NVMF_PORT", 00:25:54.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:54.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:54.101 "hdgst": ${hdgst:-false}, 00:25:54.101 "ddgst": ${ddgst:-false} 00:25:54.101 }, 00:25:54.101 "method": "bdev_nvme_attach_controller" 00:25:54.101 } 00:25:54.101 EOF 00:25:54.101 )") 00:25:54.101 12:03:47 -- nvmf/common.sh@542 -- # cat 00:25:54.101 12:03:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:54.101 12:03:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:54.101 { 00:25:54.101 "params": { 00:25:54.101 "name": "Nvme$subsystem", 00:25:54.101 "trtype": "$TEST_TRANSPORT", 00:25:54.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:54.101 "adrfam": "ipv4", 00:25:54.101 "trsvcid": "$NVMF_PORT", 00:25:54.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:54.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:54.101 "hdgst": ${hdgst:-false}, 00:25:54.101 "ddgst": ${ddgst:-false} 00:25:54.101 }, 00:25:54.101 "method": "bdev_nvme_attach_controller" 00:25:54.101 } 00:25:54.101 EOF 00:25:54.101 )") 00:25:54.101 12:03:47 -- nvmf/common.sh@542 -- # cat 00:25:54.101 12:03:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:54.101 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.101 12:03:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:54.101 { 00:25:54.101 "params": { 00:25:54.101 "name": "Nvme$subsystem", 00:25:54.101 "trtype": "$TEST_TRANSPORT", 00:25:54.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:54.101 "adrfam": "ipv4", 00:25:54.101 "trsvcid": "$NVMF_PORT", 00:25:54.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:54.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:54.101 "hdgst": ${hdgst:-false}, 00:25:54.101 "ddgst": ${ddgst:-false} 00:25:54.101 }, 00:25:54.101 "method": "bdev_nvme_attach_controller" 00:25:54.101 } 00:25:54.101 EOF 00:25:54.101 )") 00:25:54.101 12:03:47 -- nvmf/common.sh@542 -- # cat 00:25:54.101 12:03:47 -- nvmf/common.sh@544 -- # jq . 
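gen_nvmf_target_json, traced above, builds one bdev_nvme_attach_controller entry per subsystem, joins them with IFS=',' and pipes the result through jq; bdevperf then reads that document from /dev/fd/63 on its command line, and the fully expanded JSON is printed just below. A condensed sketch of the same generation step, with field values taken from the trace; the enclosing subsystems/bdev wrapper is added here only so the sketch stands alone as a loadable SPDK JSON config, it is not shown verbatim in the trace:

# Sketch: emit a bdevperf JSON config with one NVMe-oF controller per subsystem.
gen_bdevperf_json() {
    local n entries=()
    for n in $(seq 1 "${1:-10}"); do
        entries+=('{
          "params": {
            "name": "Nvme'"$n"'",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode'"$n"'",
            "hostnqn": "nqn.2016-06.io.spdk:host'"$n"'",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }')
    done
    local IFS=,
    printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "${entries[*]}" | jq .
}
# bdevperf consumes the document via process substitution, as in the trace:
#   ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_bdevperf_json 10) -q 64 -o 65536 -w verify -t 10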
00:25:54.101 12:03:47 -- nvmf/common.sh@545 -- # IFS=, 00:25:54.101 12:03:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:54.101 "params": { 00:25:54.101 "name": "Nvme1", 00:25:54.101 "trtype": "tcp", 00:25:54.101 "traddr": "10.0.0.2", 00:25:54.101 "adrfam": "ipv4", 00:25:54.101 "trsvcid": "4420", 00:25:54.101 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:54.101 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:54.101 "hdgst": false, 00:25:54.101 "ddgst": false 00:25:54.101 }, 00:25:54.101 "method": "bdev_nvme_attach_controller" 00:25:54.101 },{ 00:25:54.101 "params": { 00:25:54.101 "name": "Nvme2", 00:25:54.101 "trtype": "tcp", 00:25:54.101 "traddr": "10.0.0.2", 00:25:54.101 "adrfam": "ipv4", 00:25:54.101 "trsvcid": "4420", 00:25:54.101 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:54.101 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:54.101 "hdgst": false, 00:25:54.101 "ddgst": false 00:25:54.101 }, 00:25:54.101 "method": "bdev_nvme_attach_controller" 00:25:54.101 },{ 00:25:54.101 "params": { 00:25:54.101 "name": "Nvme3", 00:25:54.101 "trtype": "tcp", 00:25:54.101 "traddr": "10.0.0.2", 00:25:54.101 "adrfam": "ipv4", 00:25:54.101 "trsvcid": "4420", 00:25:54.101 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:54.101 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:54.101 "hdgst": false, 00:25:54.101 "ddgst": false 00:25:54.101 }, 00:25:54.101 "method": "bdev_nvme_attach_controller" 00:25:54.101 },{ 00:25:54.101 "params": { 00:25:54.101 "name": "Nvme4", 00:25:54.101 "trtype": "tcp", 00:25:54.101 "traddr": "10.0.0.2", 00:25:54.101 "adrfam": "ipv4", 00:25:54.101 "trsvcid": "4420", 00:25:54.101 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:54.101 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:54.101 "hdgst": false, 00:25:54.101 "ddgst": false 00:25:54.101 }, 00:25:54.101 "method": "bdev_nvme_attach_controller" 00:25:54.101 },{ 00:25:54.101 "params": { 00:25:54.101 "name": "Nvme5", 00:25:54.101 "trtype": "tcp", 00:25:54.101 "traddr": "10.0.0.2", 00:25:54.101 "adrfam": "ipv4", 00:25:54.101 "trsvcid": "4420", 00:25:54.101 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:54.101 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:54.101 "hdgst": false, 00:25:54.101 "ddgst": false 00:25:54.101 }, 00:25:54.101 "method": "bdev_nvme_attach_controller" 00:25:54.101 },{ 00:25:54.101 "params": { 00:25:54.101 "name": "Nvme6", 00:25:54.101 "trtype": "tcp", 00:25:54.101 "traddr": "10.0.0.2", 00:25:54.101 "adrfam": "ipv4", 00:25:54.101 "trsvcid": "4420", 00:25:54.101 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:54.101 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:54.101 "hdgst": false, 00:25:54.101 "ddgst": false 00:25:54.101 }, 00:25:54.101 "method": "bdev_nvme_attach_controller" 00:25:54.101 },{ 00:25:54.101 "params": { 00:25:54.101 "name": "Nvme7", 00:25:54.101 "trtype": "tcp", 00:25:54.101 "traddr": "10.0.0.2", 00:25:54.102 "adrfam": "ipv4", 00:25:54.102 "trsvcid": "4420", 00:25:54.102 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:54.102 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:54.102 "hdgst": false, 00:25:54.102 "ddgst": false 00:25:54.102 }, 00:25:54.102 "method": "bdev_nvme_attach_controller" 00:25:54.102 },{ 00:25:54.102 "params": { 00:25:54.102 "name": "Nvme8", 00:25:54.102 "trtype": "tcp", 00:25:54.102 "traddr": "10.0.0.2", 00:25:54.102 "adrfam": "ipv4", 00:25:54.102 "trsvcid": "4420", 00:25:54.102 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:54.102 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:54.102 "hdgst": false, 00:25:54.102 "ddgst": false 00:25:54.102 }, 00:25:54.102 "method": 
"bdev_nvme_attach_controller" 00:25:54.102 },{ 00:25:54.102 "params": { 00:25:54.102 "name": "Nvme9", 00:25:54.102 "trtype": "tcp", 00:25:54.102 "traddr": "10.0.0.2", 00:25:54.102 "adrfam": "ipv4", 00:25:54.102 "trsvcid": "4420", 00:25:54.102 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:54.102 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:54.102 "hdgst": false, 00:25:54.102 "ddgst": false 00:25:54.102 }, 00:25:54.102 "method": "bdev_nvme_attach_controller" 00:25:54.102 },{ 00:25:54.102 "params": { 00:25:54.102 "name": "Nvme10", 00:25:54.102 "trtype": "tcp", 00:25:54.102 "traddr": "10.0.0.2", 00:25:54.102 "adrfam": "ipv4", 00:25:54.102 "trsvcid": "4420", 00:25:54.102 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:54.102 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:54.102 "hdgst": false, 00:25:54.102 "ddgst": false 00:25:54.102 }, 00:25:54.102 "method": "bdev_nvme_attach_controller" 00:25:54.102 }' 00:25:54.102 [2024-06-10 12:03:47.794666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.102 [2024-06-10 12:03:47.857767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.013 Running I/O for 10 seconds... 00:25:56.289 12:03:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:56.289 12:03:49 -- common/autotest_common.sh@852 -- # return 0 00:25:56.289 12:03:49 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:56.289 12:03:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:56.289 12:03:49 -- common/autotest_common.sh@10 -- # set +x 00:25:56.289 12:03:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:56.289 12:03:49 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:56.289 12:03:49 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:56.289 12:03:49 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:56.289 12:03:49 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:56.289 12:03:49 -- target/shutdown.sh@57 -- # local ret=1 00:25:56.289 12:03:49 -- target/shutdown.sh@58 -- # local i 00:25:56.289 12:03:49 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:56.289 12:03:49 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:56.289 12:03:49 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:56.289 12:03:49 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:56.289 12:03:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:56.289 12:03:49 -- common/autotest_common.sh@10 -- # set +x 00:25:56.289 12:03:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:56.289 12:03:49 -- target/shutdown.sh@60 -- # read_io_count=173 00:25:56.289 12:03:49 -- target/shutdown.sh@63 -- # '[' 173 -ge 100 ']' 00:25:56.289 12:03:49 -- target/shutdown.sh@64 -- # ret=0 00:25:56.289 12:03:49 -- target/shutdown.sh@65 -- # break 00:25:56.289 12:03:49 -- target/shutdown.sh@69 -- # return 0 00:25:56.289 12:03:49 -- target/shutdown.sh@134 -- # killprocess 2060086 00:25:56.289 12:03:49 -- common/autotest_common.sh@926 -- # '[' -z 2060086 ']' 00:25:56.289 12:03:49 -- common/autotest_common.sh@930 -- # kill -0 2060086 00:25:56.289 12:03:49 -- common/autotest_common.sh@931 -- # uname 00:25:56.289 12:03:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:56.289 12:03:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2060086 00:25:56.289 12:03:49 -- common/autotest_common.sh@932 -- # 
process_name=reactor_1 00:25:56.289 12:03:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:56.289 12:03:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2060086' 00:25:56.289 killing process with pid 2060086 00:25:56.289 12:03:49 -- common/autotest_common.sh@945 -- # kill 2060086 00:25:56.289 12:03:49 -- common/autotest_common.sh@950 -- # wait 2060086 00:25:56.289 [2024-06-10 12:03:49.913192] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913237] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913245] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913251] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913256] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913261] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913265] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913270] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913274] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913284] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913289] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913293] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913297] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913302] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913306] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913311] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913315] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913319] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913324] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913328] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913332] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913337] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913342] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913347] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913351] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913355] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913360] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913364] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913368] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913373] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913378] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913382] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913387] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913391] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913395] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913400] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913404] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913409] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913414] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913418] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.289 [2024-06-10 12:03:49.913422] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 
00:25:56.290 [2024-06-10 12:03:49.913427] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.913431] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.913436] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.913440] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.913445] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.913449] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.913453] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.913458] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.913462] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.913466] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.913470] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.913475] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.913479] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.913484] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.913488] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.913492] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.913496] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.913501] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.913505] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.913509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.913513] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.913518] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb470 is 
same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914520] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914544] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914551] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914555] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914560] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914565] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914570] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914579] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914583] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914588] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914592] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914596] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914601] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914606] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914610] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914614] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914619] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914623] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914627] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914632] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914636] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914641] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914645] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914650] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914654] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914658] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914663] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914668] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914673] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914677] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914681] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914686] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914690] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914694] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914698] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914703] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914707] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914711] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914715] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914720] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914724] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914729] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914734] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914739] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914743] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914748] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914752] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914756] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914760] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914765] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914769] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914774] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.914778] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8edde0 is same with the state(5) to be set 00:25:56.290 [2024-06-10 12:03:49.915457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.290 [2024-06-10 12:03:49.915498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.290 [2024-06-10 12:03:49.915517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.290 [2024-06-10 12:03:49.915525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.290 [2024-06-10 12:03:49.915532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.290 [2024-06-10 12:03:49.915540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.290 [2024-06-10 12:03:49.915548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.290 [2024-06-10 12:03:49.915555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.290 [2024-06-10 12:03:49.915563] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147d260 is same with the state(5) to be set 00:25:56.291 [2024-06-10 12:03:49.919637] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb920 is same with the state(5) to be set 00:25:56.291 [2024-06-10 12:03:49.919652] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x8eb920 is same with the state(5) to be set 00:25:56.291 [2024-06-10 12:03:49.919656] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb920 is same with the state(5) to be set 00:25:56.291 [2024-06-10 12:03:49.919661] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb920 is same with the state(5) to be set 00:25:56.291 [2024-06-10 12:03:49.919666] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb920 is same with the state(5) to be set 00:25:56.291 [2024-06-10 12:03:49.919670] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb920 is same with the state(5) to be set 00:25:56.291 [2024-06-10 12:03:49.919675] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb920 is same with the state(5) to be set 00:25:56.291 [2024-06-10 12:03:49.919679] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb920 is same with the state(5) to be set 00:25:56.291 [2024-06-10 12:03:49.919684] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb920 is same with the state(5) to be set 00:25:56.291 [2024-06-10 12:03:49.919688] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb920 is same with the state(5) to be set 00:25:56.291 [2024-06-10 12:03:49.919693] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb920 is same with the state(5) to be set 00:25:56.291 [2024-06-10 12:03:49.919697] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb920 is same with the state(5) to be set 00:25:56.291 [2024-06-10 12:03:49.919701] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb920 is same with the state(5) to be set 00:25:56.291 [2024-06-10 12:03:49.919706] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb920 is same with the state(5) to be set 00:25:56.291 [2024-06-10 12:03:49.919710] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb920 is same with the state(5) to be set 00:25:56.291 [2024-06-10 12:03:49.919714] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb920 is same with the state(5) to be set 00:25:56.291 [2024-06-10 12:03:49.919719] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb920 is same with the state(5) to be set 00:25:56.291 [2024-06-10 12:03:49.919723] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb920 is same with the state(5) to be set 00:25:56.291 [2024-06-10 12:03:49.919728] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb920 is same with the state(5) to be set 00:25:56.291 [2024-06-10 12:03:49.919735] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb920 is same with the state(5) to be set 00:25:56.291 [2024-06-10 12:03:49.919739] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb920 is same with the state(5) to be set 00:25:56.291 [2024-06-10 12:03:49.919744] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb920 is same with the state(5) to be set 00:25:56.291 [2024-06-10 12:03:49.919748] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb920 is same with the state(5) to be set 00:25:56.291 [2024-06-10 12:03:49.919752] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb920 is same with the state(5) to be set
[... same *ERROR* repeated for tqpair=0x8eb920 through 2024-06-10 12:03:49.919923 ...]
[... same *ERROR* repeated for tqpair=0x8ebdd0, 2024-06-10 12:03:49.920962 to 12:03:49.921209 ...]
[... same *ERROR* repeated for tqpair=0x8ec6f0, 2024-06-10 12:03:49.922162 to 12:03:49.922414 ...]
[... same *ERROR* repeated for tqpair=0x8ecba0, 2024-06-10 12:03:49.923060 to 12:03:49.923353 ...]
[... same *ERROR* repeated for tqpair=0x8ed030, 2024-06-10 12:03:49.924259 to 12:03:49.924507 ...]
00:25:56.294 [2024-06-10 12:03:49.924511]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed030 is same with the state(5) to be set 00:25:56.294 [2024-06-10 12:03:49.924515] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed030 is same with the state(5) to be set 00:25:56.294 [2024-06-10 12:03:49.924520] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed030 is same with the state(5) to be set 00:25:56.294 [2024-06-10 12:03:49.924524] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed030 is same with the state(5) to be set 00:25:56.294 [2024-06-10 12:03:49.924529] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed030 is same with the state(5) to be set 00:25:56.294 [2024-06-10 12:03:49.924533] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed030 is same with the state(5) to be set 00:25:56.294 [2024-06-10 12:03:49.924538] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed030 is same with the state(5) to be set 00:25:56.294 [2024-06-10 12:03:49.924542] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed030 is same with the state(5) to be set 00:25:56.294 [2024-06-10 12:03:49.924548] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed030 is same with the state(5) to be set 00:25:56.294 [2024-06-10 12:03:49.924552] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed030 is same with the state(5) to be set 00:25:56.294 [2024-06-10 12:03:49.924556] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed030 is same with the state(5) to be set 00:25:56.294 [2024-06-10 12:03:49.924785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.294 [2024-06-10 12:03:49.924807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.294 [2024-06-10 12:03:49.924823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.294 [2024-06-10 12:03:49.924830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.294 [2024-06-10 12:03:49.924841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.294 [2024-06-10 12:03:49.924848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.294 [2024-06-10 12:03:49.924858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.294 [2024-06-10 12:03:49.924865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.294 [2024-06-10 12:03:49.924874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.294 [2024-06-10 12:03:49.924881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.294 [2024-06-10 
12:03:49.924891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.294 [2024-06-10 12:03:49.924897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.294 [2024-06-10 12:03:49.924907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.294 [2024-06-10 12:03:49.924915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.294 [2024-06-10 12:03:49.924924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.294 [2024-06-10 12:03:49.924931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.294 [2024-06-10 12:03:49.924941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.294 [2024-06-10 12:03:49.924948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.294 [2024-06-10 12:03:49.924957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.294 [2024-06-10 12:03:49.924964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.294 [2024-06-10 12:03:49.924973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.294 [2024-06-10 12:03:49.924980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.294 [2024-06-10 12:03:49.924989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.294 [2024-06-10 12:03:49.925000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.294 [2024-06-10 12:03:49.925009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.294 [2024-06-10 12:03:49.925016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.294 [2024-06-10 12:03:49.925026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.294 [2024-06-10 12:03:49.925033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.294 [2024-06-10 12:03:49.925042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.295 [2024-06-10 12:03:49.925049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.295 [2024-06-10 12:03:49.925059] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.295 [2024-06-10 12:03:49.925065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.295 [2024-06-10 12:03:49.925075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.295 [2024-06-10 12:03:49.925082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.295 [2024-06-10 12:03:49.925091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.295 [2024-06-10 12:03:49.925098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.295 [2024-06-10 12:03:49.925107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.295 [2024-06-10 12:03:49.925114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.295 [2024-06-10 12:03:49.925123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.295 [2024-06-10 12:03:49.925130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.295 [2024-06-10 12:03:49.925139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.295 [2024-06-10 12:03:49.925146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.295 [2024-06-10 12:03:49.925155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.295 [2024-06-10 12:03:49.925162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.295 [2024-06-10 12:03:49.925159] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.295 [2024-06-10 12:03:49.925179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.295 [2024-06-10 12:03:49.925180] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.295 [2024-06-10 12:03:49.925190] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925197] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x8ed4c0 is same with t[2024-06-10 12:03:49.925197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:25:56.295 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.295 [2024-06-10 12:03:49.925204] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925209] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.295 [2024-06-10 12:03:49.925215] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.295 [2024-06-10 12:03:49.925220] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925225] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.295 [2024-06-10 12:03:49.925230] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-06-10 12:03:49.925235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.295 he state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925247] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925252] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.295 [2024-06-10 12:03:49.925257] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925262] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with t[2024-06-10 12:03:49.925261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:25:56.295 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.295 [2024-06-10 12:03:49.925269] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31360 len:12[2024-06-10 12:03:49.925274] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with t8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:56.295 he state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.295 [2024-06-10 12:03:49.925286] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925291] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.295 [2024-06-10 12:03:49.925295] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925301] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.295 [2024-06-10 12:03:49.925306] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925312] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with t[2024-06-10 12:03:49.925311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:28288 len:12he state(5) to be set 00:25:56.295 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.295 [2024-06-10 12:03:49.925319] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.295 [2024-06-10 12:03:49.925324] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925329] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.295 [2024-06-10 12:03:49.925334] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925339] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with t[2024-06-10 12:03:49.925339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:25:56.295 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.295 [2024-06-10 12:03:49.925345] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925350] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with t[2024-06-10 
12:03:49.925350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28672 len:12he state(5) to be set 00:25:56.295 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.295 [2024-06-10 12:03:49.925357] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.295 [2024-06-10 12:03:49.925362] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925368] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.295 [2024-06-10 12:03:49.925372] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925378] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with t[2024-06-10 12:03:49.925377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:25:56.295 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.295 [2024-06-10 12:03:49.925386] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:31488 len:1[2024-06-10 12:03:49.925391] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.295 he state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925399] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.295 [2024-06-10 12:03:49.925404] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925409] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.295 [2024-06-10 12:03:49.925414] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.295 [2024-06-10 12:03:49.925417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.295 [2024-06-10 12:03:49.925419] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.296 [2024-06-10 12:03:49.925424] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.296 
[2024-06-10 12:03:49.925427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925429] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.296 [2024-06-10 12:03:49.925434] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.296 [2024-06-10 12:03:49.925435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925439] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.296 [2024-06-10 12:03:49.925444] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.296 [2024-06-10 12:03:49.925444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925450] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.296 [2024-06-10 12:03:49.925454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925455] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.296 [2024-06-10 12:03:49.925461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.296 [2024-06-10 12:03:49.925465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925466] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.296 [2024-06-10 12:03:49.925474] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.296 [2024-06-10 12:03:49.925475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925478] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.296 [2024-06-10 12:03:49.925483] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.296 [2024-06-10 12:03:49.925485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925488] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.296 [2024-06-10 12:03:49.925492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925493] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.296 
[2024-06-10 12:03:49.925501] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.296 [2024-06-10 12:03:49.925504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925506] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.296 [2024-06-10 12:03:49.925511] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.296 [2024-06-10 12:03:49.925511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925518] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.296 [2024-06-10 12:03:49.925523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925524] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.296 [2024-06-10 12:03:49.925532] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed4c0 is same with the state(5) to be set 00:25:56.296 [2024-06-10 12:03:49.925531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 
lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:34432 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.296 [2024-06-10 12:03:49.925909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.296 [2024-06-10 12:03:49.925962] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.296 [2024-06-10 12:03:49.925975] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.296 [2024-06-10 12:03:49.925980] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.296 [2024-06-10 12:03:49.925985] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.925990] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.925994] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.925998] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926002] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926007] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926011] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926016] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926021] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926025] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926033] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926038] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926042] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926047] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926052] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926057] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926061] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926066] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926070] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926074] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926079] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926083] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926087] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926092] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 
00:25:56.297 [2024-06-10 12:03:49.926096] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926101] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926105] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926109] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926114] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926118] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926123] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926127] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926131] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926227] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15dfc60 was disconnected and freed. reset controller. 00:25:56.297 [2024-06-10 12:03:49.926301] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147d260 (9): Bad file descriptor 00:25:56.297 [2024-06-10 12:03:49.926334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.297 [2024-06-10 12:03:49.926343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.297 [2024-06-10 12:03:49.926355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.297 [2024-06-10 12:03:49.926362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.297 [2024-06-10 12:03:49.926370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.297 [2024-06-10 12:03:49.926377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.297 [2024-06-10 12:03:49.926385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.297 [2024-06-10 12:03:49.926392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.297 [2024-06-10 12:03:49.926400] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147ffc0 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.297 [2024-06-10 
12:03:49.926432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.297 [2024-06-10 12:03:49.926440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.297 [2024-06-10 12:03:49.926447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.297 [2024-06-10 12:03:49.926454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.297 [2024-06-10 12:03:49.926461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.297 [2024-06-10 12:03:49.926469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.297 [2024-06-10 12:03:49.926476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.297 [2024-06-10 12:03:49.926482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642f20 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.297 [2024-06-10 12:03:49.926513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.297 [2024-06-10 12:03:49.926521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.297 [2024-06-10 12:03:49.926528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.297 [2024-06-10 12:03:49.926536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.297 [2024-06-10 12:03:49.926543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.297 [2024-06-10 12:03:49.926550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.297 [2024-06-10 12:03:49.926557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.297 [2024-06-10 12:03:49.926564] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1556640 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.297 [2024-06-10 12:03:49.926596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.297 [2024-06-10 12:03:49.926605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.297 [2024-06-10 12:03:49.926612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.297 [2024-06-10 12:03:49.926619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.297 [2024-06-10 12:03:49.926626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.297 [2024-06-10 12:03:49.926634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.297 [2024-06-10 12:03:49.926641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.297 [2024-06-10 12:03:49.926647] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163b1d0 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.297 [2024-06-10 12:03:49.926674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.297 [2024-06-10 12:03:49.926682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.297 [2024-06-10 12:03:49.926689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.297 [2024-06-10 12:03:49.926697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.297 [2024-06-10 12:03:49.926704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.297 [2024-06-10 12:03:49.926712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.297 [2024-06-10 12:03:49.926718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.297 [2024-06-10 12:03:49.926725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643aa0 is same with the state(5) to be set 00:25:56.297 [2024-06-10 12:03:49.926746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.297 [2024-06-10 12:03:49.926754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.297 [2024-06-10 12:03:49.926762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.297 [2024-06-10 12:03:49.926769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.926776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.298 [2024-06-10 12:03:49.926783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 
12:03:49.926791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.298 [2024-06-10 12:03:49.926798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.926807] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0a50 is same with the state(5) to be set 00:25:56.298 [2024-06-10 12:03:49.926828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.298 [2024-06-10 12:03:49.926837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.926845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.298 [2024-06-10 12:03:49.926851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.926859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.298 [2024-06-10 12:03:49.926866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.926875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.298 [2024-06-10 12:03:49.926882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.926888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0170 is same with the state(5) to be set 00:25:56.298 [2024-06-10 12:03:49.926922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.298 [2024-06-10 12:03:49.926931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.926939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.298 [2024-06-10 12:03:49.926945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.926953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.298 [2024-06-10 12:03:49.926960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.926968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.298 [2024-06-10 12:03:49.926975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.926981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1556210 is same with the state(5) to be set 00:25:56.298 [2024-06-10 12:03:49.927398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.298 [2024-06-10 12:03:49.927415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.927428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.298 [2024-06-10 12:03:49.927435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.927445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.298 [2024-06-10 12:03:49.927452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.927462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.298 [2024-06-10 12:03:49.927472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.927481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.298 [2024-06-10 12:03:49.927489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.927498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.298 [2024-06-10 12:03:49.927505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.927514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.298 [2024-06-10 12:03:49.927521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.927531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.298 [2024-06-10 12:03:49.927538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.927547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.298 [2024-06-10 12:03:49.927554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.927563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.298 [2024-06-10 12:03:49.927571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.927580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.298 [2024-06-10 12:03:49.927587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.927596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.298 [2024-06-10 12:03:49.927604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.927613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.298 [2024-06-10 12:03:49.927620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.927629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.298 [2024-06-10 12:03:49.927636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.927645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.298 [2024-06-10 12:03:49.927652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.927662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.298 [2024-06-10 12:03:49.927669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.927679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.298 [2024-06-10 12:03:49.927686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.927696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.298 [2024-06-10 12:03:49.927703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.927712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.298 [2024-06-10 12:03:49.927719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.927728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.298 [2024-06-10 12:03:49.927735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:56.298 [2024-06-10 12:03:49.927744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.298 [2024-06-10 12:03:49.927751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.927760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.298 [2024-06-10 12:03:49.927767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.298 [2024-06-10 12:03:49.927776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.298 [2024-06-10 12:03:49.927783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.299 [2024-06-10 12:03:49.927792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-06-10 12:03:49.927799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.299 [2024-06-10 12:03:49.927808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-06-10 12:03:49.927815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.299 [2024-06-10 12:03:49.927824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-06-10 12:03:49.927832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.299 [2024-06-10 12:03:49.927841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-06-10 12:03:49.927848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.299 [2024-06-10 12:03:49.927857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-06-10 12:03:49.927865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.299 [2024-06-10 12:03:49.927874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-06-10 12:03:49.927882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.299 [2024-06-10 12:03:49.927891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-06-10 12:03:49.927898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.299 
[2024-06-10 12:03:49.927908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-06-10 12:03:49.927915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.299 [2024-06-10 12:03:49.927924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-06-10 12:03:49.927931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.299 [2024-06-10 12:03:49.927941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-06-10 12:03:49.927948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.299 [2024-06-10 12:03:49.927957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-06-10 12:03:49.927964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.299 [2024-06-10 12:03:49.927973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-06-10 12:03:49.927980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.299 [2024-06-10 12:03:49.927990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-06-10 12:03:49.927997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.299 [2024-06-10 12:03:49.928006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-06-10 12:03:49.928013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.299 [2024-06-10 12:03:49.928022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-06-10 12:03:49.928029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.299 [2024-06-10 12:03:49.928038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-06-10 12:03:49.936107] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.299 [2024-06-10 12:03:49.936127] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.299 [2024-06-10 12:03:49.936134] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.299 [2024-06-10 12:03:49.936140] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.299 [2024-06-10 12:03:49.936145] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.299 [2024-06-10 12:03:49.936153] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.299 [2024-06-10 12:03:49.936158] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.299 [2024-06-10 12:03:49.936162] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.299 [2024-06-10 12:03:49.936167] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.299 [2024-06-10 12:03:49.936171] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.299 [2024-06-10 12:03:49.936176] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.299 [2024-06-10 12:03:49.936180] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.299 [2024-06-10 12:03:49.936185] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.299 [2024-06-10 12:03:49.936189] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.299 [2024-06-10 12:03:49.936194] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.299 [2024-06-10 12:03:49.936198] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.299 [2024-06-10 12:03:49.936202] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.299 [2024-06-10 12:03:49.936207] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.299 [2024-06-10 12:03:49.936212] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.299 [2024-06-10 12:03:49.936216] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.299 [2024-06-10 12:03:49.936221] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.299 [2024-06-10 12:03:49.936226] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.299 [2024-06-10 12:03:49.936230] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.299 [2024-06-10 12:03:49.936235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.299 [2024-06-10 12:03:49.936239] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 
00:25:56.299 [2024-06-10 12:03:49.936253] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.299 [2024-06-10 12:03:49.936258] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ed950 is same with the state(5) to be set 00:25:56.299 [2024-06-10 12:03:49.941787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.299 [2024-06-10 12:03:49.941828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-06-10 12:03:49.941838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.299 [2024-06-10 12:03:49.941848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-06-10 12:03:49.941855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.299 [2024-06-10 12:03:49.941869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-06-10 12:03:49.941877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.299 [2024-06-10 12:03:49.941886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-06-10 12:03:49.941893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.299 [2024-06-10 12:03:49.941902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-06-10 12:03:49.941910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.299 [2024-06-10 12:03:49.941919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-06-10 12:03:49.941926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.299 [2024-06-10 12:03:49.941935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-06-10 12:03:49.941942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.299 [2024-06-10 12:03:49.941951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-06-10 12:03:49.941959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.299 [2024-06-10 12:03:49.941968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-06-10 12:03:49.941975] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.299 [2024-06-10 12:03:49.941984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-06-10 12:03:49.941991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.299 [2024-06-10 12:03:49.942000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-06-10 12:03:49.942007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.299 [2024-06-10 12:03:49.942017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-06-10 12:03:49.942024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942141] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151cd20 is same with the state(5) to be set 00:25:56.300 [2024-06-10 12:03:49.942314] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x151cd20 was disconnected and freed. reset controller. 
00:25:56.300 [2024-06-10 12:03:49.942353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 
12:03:49.942526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942689] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.300 [2024-06-10 12:03:49.942777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.300 [2024-06-10 12:03:49.942787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.942795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.942805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.942812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.942821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.942828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.942837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.942844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.942853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.942860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.942869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.942877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.942886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.942893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.942902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.942909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.942918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.942924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.942934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.942941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.942949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.942956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.942966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.942972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.942982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.942989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.943000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.943007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.943016] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.943023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.943033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.943040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.943049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.943056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.943065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.943072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.943081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.943087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.943097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.943104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.943113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.943120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.943129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.943136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.943145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.943152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.943162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.943168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.943178] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.943184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.943193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.943202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.943212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.943218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.943227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.943235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.943249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.943256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.943265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.943272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.943281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.943289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.943298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.943305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.943314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.943321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.943330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.943338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.943348] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.943354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.943364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.943371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.943380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.943387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.943396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.943403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.943470] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x151db60 was disconnected and freed. reset controller. 00:25:56.301 [2024-06-10 12:03:49.943573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.943584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.301 [2024-06-10 12:03:49.943596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.301 [2024-06-10 12:03:49.943603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.943613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.943620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.943629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.943636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.943645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.943653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.943662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.943669] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.943678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.943685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.943694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.943701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.943711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.943717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.943727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.943734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.943743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.943750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.943759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.943766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.943778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.943785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.943795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.943802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.943811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.943818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.943828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.943835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.943844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.943851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.943860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.943867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.943876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.943884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.943893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.943900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.943909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.943916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.943925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.943932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.943942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.943949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.943958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.943965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.943974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.943982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.943992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.943999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.944009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.944016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.944025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.944033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.944042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.944049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.944058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.944065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.944074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.944081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.944090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.944097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.944106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.944113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.944122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.944129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.944138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.944145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.302 [2024-06-10 12:03:49.944155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.302 [2024-06-10 12:03:49.944162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.944171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.944178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.944188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.944195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.944204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.944211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.944220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.944228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.944236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.944256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.944266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.944273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.944282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.944289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.944298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.944305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.944314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.944322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.944331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.944338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:56.303 [2024-06-10 12:03:49.944347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.944354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.944363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.944370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.944379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.944386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.944395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.944404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.948600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.948631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.948642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.948650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.948660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.948668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.948677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.948685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.948694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.948701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.948711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.948718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 
[2024-06-10 12:03:49.948727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.948734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.948744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.948751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.948760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.948767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.948776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.948783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.948793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.948800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.948809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.948816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.948830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.948837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.948846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.948854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.948922] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15dbeb0 was disconnected and freed. reset controller. 
00:25:56.303 [2024-06-10 12:03:49.950300] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147ffc0 (9): Bad file descriptor 00:25:56.303 [2024-06-10 12:03:49.950328] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1642f20 (9): Bad file descriptor 00:25:56.303 [2024-06-10 12:03:49.950347] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1556640 (9): Bad file descriptor 00:25:56.303 [2024-06-10 12:03:49.950361] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163b1d0 (9): Bad file descriptor 00:25:56.303 [2024-06-10 12:03:49.950374] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643aa0 (9): Bad file descriptor 00:25:56.303 [2024-06-10 12:03:49.950390] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0a50 (9): Bad file descriptor 00:25:56.303 [2024-06-10 12:03:49.950404] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0170 (9): Bad file descriptor 00:25:56.303 [2024-06-10 12:03:49.950434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.303 [2024-06-10 12:03:49.950446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.950454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.303 [2024-06-10 12:03:49.950461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.950469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.303 [2024-06-10 12:03:49.950476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.950484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.303 [2024-06-10 12:03:49.950491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.950498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477790 is same with the state(5) to be set 00:25:56.303 [2024-06-10 12:03:49.950517] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1556210 (9): Bad file descriptor 00:25:56.303 [2024-06-10 12:03:49.950751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.950768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.950781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.950789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:56.303 [2024-06-10 12:03:49.950803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.303 [2024-06-10 12:03:49.950810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.303 [2024-06-10 12:03:49.950820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.950827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.950837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.950844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.950853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.950860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.950870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.950878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.950887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.950894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.950903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.950910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.950920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.950927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.950936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.950943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.950953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.950960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 
12:03:49.950969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.950977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.950986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.950993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951136] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951478] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.304 [2024-06-10 12:03:49.951494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.304 [2024-06-10 12:03:49.951501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.951510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.951517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.951527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.951534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.951543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.951550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.951559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.951566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.951575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.951582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.951592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.951599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.951608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.951615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.951624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.951631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.951642] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.951649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.951659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.951666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.951675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.951682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.951692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.951699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.951708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.951715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.951724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.951731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.951741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.951748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.951757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.951764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.951773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.951781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.951790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.951797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.951806] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.951813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.951822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.951829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.951892] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x151b7a0 was disconnected and freed. reset controller. 00:25:56.305 [2024-06-10 12:03:49.955550] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:56.305 [2024-06-10 12:03:49.955628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.955640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.955651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.955659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.955668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.955675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.955685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.955692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.955701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.955708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.955717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.955724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.955734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.955741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.955750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 
nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.955757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.955766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.955773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.955782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.955789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.955798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.955804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.955814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.955821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.955833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.955841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.955850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.955857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.955866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.955873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.955882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.955889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.955898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.955906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.955915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26880 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.955922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.305 [2024-06-10 12:03:49.955931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.305 [2024-06-10 12:03:49.955938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.955947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.955954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.955964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.955970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.955980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.955987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.955996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28288 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:56.306 [2024-06-10 12:03:49.956261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 
[2024-06-10 12:03:49.956424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.306 [2024-06-10 12:03:49.956557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.306 [2024-06-10 12:03:49.956566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.956573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.956582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 
12:03:49.956589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.956598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.956605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.956614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.956621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.956630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.956637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.956646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.956654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.956663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.956671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.956681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.956688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.957937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.957950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.957963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.957972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.957983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.957991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.958002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.958010] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.958021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.958029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.958038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.958045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.958054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.958061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.958071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.958078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.958087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.958094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.958104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.958111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.958120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.958127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.958139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.958146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.958156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.958163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.958172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.958180] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.958189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.958196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.958205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.958212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.958221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.958228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.958238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.958249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.958258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.958266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.958275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.958282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.958291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.958298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.958308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.958315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.958324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.958331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.958341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.958349] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.958359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.307 [2024-06-10 12:03:49.958366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.307 [2024-06-10 12:03:49.958375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958513] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958678] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.958985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.958992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.308 [2024-06-10 12:03:49.959001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.308 [2024-06-10 12:03:49.959008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.309 [2024-06-10 12:03:49.959068] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x151a220 was disconnected and freed. reset controller. 00:25:56.309 [2024-06-10 12:03:49.960688] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:25:56.309 [2024-06-10 12:03:49.960712] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:56.309 [2024-06-10 12:03:49.961147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.309 [2024-06-10 12:03:49.961672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.309 [2024-06-10 12:03:49.961712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x163b1d0 with addr=10.0.0.2, port=4420 00:25:56.309 [2024-06-10 12:03:49.961723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163b1d0 is same with the state(5) to be set 00:25:56.309 [2024-06-10 12:03:49.961758] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:56.309 [2024-06-10 12:03:49.961769] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:56.309 [2024-06-10 12:03:49.961787] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:56.309 [2024-06-10 12:03:49.961801] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:56.309 [2024-06-10 12:03:49.961817] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1477790 (9): Bad file descriptor 00:25:56.309 [2024-06-10 12:03:49.963624] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:56.309 [2024-06-10 12:03:49.964017] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.309 [2024-06-10 12:03:49.964037] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:56.309 [2024-06-10 12:03:49.964497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.309 [2024-06-10 12:03:49.964878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.309 [2024-06-10 12:03:49.964891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0170 with addr=10.0.0.2, port=4420 00:25:56.309 [2024-06-10 12:03:49.964901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0170 is same with the state(5) to be set 00:25:56.309 [2024-06-10 12:03:49.965456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.309 [2024-06-10 12:03:49.965884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.309 [2024-06-10 12:03:49.965896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0a50 with addr=10.0.0.2, port=4420 00:25:56.309 [2024-06-10 12:03:49.965906] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0a50 is same with the state(5) to be set 00:25:56.309 [2024-06-10 12:03:49.965926] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x163b1d0 (9): Bad file descriptor 00:25:56.309 [2024-06-10 12:03:49.966542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.966557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.309 [2024-06-10 12:03:49.966572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.966579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.309 [2024-06-10 12:03:49.966589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.966596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.309 [2024-06-10 12:03:49.966606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.966613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.309 [2024-06-10 12:03:49.966622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.966629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.309 [2024-06-10 12:03:49.966638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.966646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.309 [2024-06-10 12:03:49.966655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.966662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.309 [2024-06-10 12:03:49.966671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.966678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.309 [2024-06-10 12:03:49.966687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.966694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.309 [2024-06-10 12:03:49.966703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.966710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:56.309 [2024-06-10 12:03:49.966719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.966726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.309 [2024-06-10 12:03:49.966736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.966743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.309 [2024-06-10 12:03:49.966752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.966762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.309 [2024-06-10 12:03:49.966771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.966778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.309 [2024-06-10 12:03:49.966787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.966794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.309 [2024-06-10 12:03:49.966803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.966810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.309 [2024-06-10 12:03:49.966820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.966827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.309 [2024-06-10 12:03:49.966836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.966843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.309 [2024-06-10 12:03:49.966853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.966860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.309 [2024-06-10 12:03:49.966869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.966876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
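The NOTICE pairs above are SPDK's per-command dump of the I/Os that were still outstanding on qpair 1 when it was torn down during the controller reset: nvme_io_qpair_print_command prints the queued command (opcode, cid, lba, length) and spdk_nvme_print_completion prints the synthetic completion it received. The "(00/08)" in each completion is the status code type / status code pair: type 0x0 (Generic Command Status) with code 0x08 (Command Aborted due to SQ Deletion), and dnr:0 means the Do Not Retry bit is clear, so the command may be retried once the controller comes back. The snippet below is a minimal standalone sketch of decoding such a pair; it is illustrative only and not SPDK's own implementation.

    /* Minimal sketch (assumption: not SPDK source code) - decode the "(sct/sc)"
     * pair printed in the completions above, e.g. "(00/08)". */
    #include <stdio.h>
    #include <stdint.h>

    static const char *decode_status(uint8_t sct, uint8_t sc)
    {
        if (sct == 0x0 && sc == 0x08) {
            return "ABORTED - SQ DELETION";   /* generic status, code 0x08 */
        }
        if (sct == 0x0 && sc == 0x00) {
            return "SUCCESS";
        }
        return "OTHER";                       /* full table is in the NVMe spec */
    }

    int main(void)
    {
        printf("%s\n", decode_status(0x00, 0x08)); /* -> ABORTED - SQ DELETION */
        return 0;
    }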
00:25:56.309 [2024-06-10 12:03:49.966886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.966893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.309 [2024-06-10 12:03:49.966902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.966910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.309 [2024-06-10 12:03:49.966919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.966926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.309 [2024-06-10 12:03:49.966935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.966942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.309 [2024-06-10 12:03:49.966951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.966958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.309 [2024-06-10 12:03:49.966969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.966976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.309 [2024-06-10 12:03:49.966985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.966992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.309 [2024-06-10 12:03:49.967002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.967009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.309 [2024-06-10 12:03:49.967018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.309 [2024-06-10 12:03:49.967025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 
12:03:49.967050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967219] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967389] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967551] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.967610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.967618] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15da8d0 is same with the state(5) to be set 00:25:56.310 [2024-06-10 12:03:49.968883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.968898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.968911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.310 [2024-06-10 12:03:49.968920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.310 [2024-06-10 12:03:49.968931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.968940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.968951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.968959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.968970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.968979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.968989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.968998] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969168] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969346] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-06-10 12:03:49.969511] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.311 [2024-06-10 12:03:49.969520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.312 [2024-06-10 12:03:49.969527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.312 [2024-06-10 12:03:49.969536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.312 [2024-06-10 12:03:49.969543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.312 [2024-06-10 12:03:49.969552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.312 [2024-06-10 12:03:49.969559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.312 [2024-06-10 12:03:49.969569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.312 [2024-06-10 12:03:49.969576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.312 [2024-06-10 12:03:49.969586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.312 [2024-06-10 12:03:49.969593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.312 [2024-06-10 12:03:49.969602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.312 [2024-06-10 12:03:49.969609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.312 [2024-06-10 12:03:49.969618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.312 [2024-06-10 12:03:49.969625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.312 [2024-06-10 12:03:49.969634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.312 [2024-06-10 12:03:49.969641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.312 [2024-06-10 12:03:49.969651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.312 [2024-06-10 12:03:49.969658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.312 [2024-06-10 12:03:49.969667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.312 [2024-06-10 12:03:49.969674] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.312 [2024-06-10 12:03:49.969683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.312 [2024-06-10 12:03:49.969690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.312 [2024-06-10 12:03:49.969699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.312 [2024-06-10 12:03:49.969707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.312 [2024-06-10 12:03:49.969719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.312 [2024-06-10 12:03:49.969726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.312 [2024-06-10 12:03:49.969735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.312 [2024-06-10 12:03:49.969742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.312 [2024-06-10 12:03:49.969752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.312 [2024-06-10 12:03:49.969759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.312 [2024-06-10 12:03:49.969768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.312 [2024-06-10 12:03:49.969775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.312 [2024-06-10 12:03:49.969785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.312 [2024-06-10 12:03:49.969793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.312 [2024-06-10 12:03:49.969802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.312 [2024-06-10 12:03:49.969809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.312 [2024-06-10 12:03:49.969818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.312 [2024-06-10 12:03:49.969825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.312 [2024-06-10 12:03:49.969835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.312 [2024-06-10 12:03:49.969843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.312 [2024-06-10 12:03:49.969852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.312 [2024-06-10 12:03:49.969859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.312 [2024-06-10 12:03:49.969869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.312 [2024-06-10 12:03:49.969876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.312 [2024-06-10 12:03:49.969885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.312 [2024-06-10 12:03:49.969892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.312 [2024-06-10 12:03:49.969901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.312 [2024-06-10 12:03:49.969909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.312 [2024-06-10 12:03:49.969918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.312 [2024-06-10 12:03:49.969926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.312 [2024-06-10 12:03:49.969935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.312 [2024-06-10 12:03:49.969942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.312 [2024-06-10 12:03:49.969952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.312 [2024-06-10 12:03:49.969959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.312 [2024-06-10 12:03:49.969968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.312 [2024-06-10 12:03:49.969975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.312 [2024-06-10 12:03:49.971226] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:56.312 [2024-06-10 12:03:49.971266] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:56.312 [2024-06-10 12:03:49.971281] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:25:56.312 [2024-06-10 12:03:49.971294] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:25:56.312 [2024-06-10 12:03:49.971306] 
nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:25:56.312 [2024-06-10 12:03:49.971729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.312 [2024-06-10 12:03:49.972010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.312 [2024-06-10 12:03:49.972021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x147d260 with addr=10.0.0.2, port=4420 00:25:56.312 [2024-06-10 12:03:49.972029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147d260 is same with the state(5) to be set 00:25:56.312 [2024-06-10 12:03:49.972391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.312 [2024-06-10 12:03:49.972777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.312 [2024-06-10 12:03:49.972787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1556640 with addr=10.0.0.2, port=4420 00:25:56.312 [2024-06-10 12:03:49.972794] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1556640 is same with the state(5) to be set 00:25:56.312 [2024-06-10 12:03:49.972804] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0170 (9): Bad file descriptor 00:25:56.312 [2024-06-10 12:03:49.972814] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0a50 (9): Bad file descriptor 00:25:56.312 [2024-06-10 12:03:49.972823] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:56.312 [2024-06-10 12:03:49.972829] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:56.312 [2024-06-10 12:03:49.972837] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:56.312 [2024-06-10 12:03:49.972866] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:56.312 [2024-06-10 12:03:49.972891] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:56.312 [2024-06-10 12:03:49.972902] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:56.312 [2024-06-10 12:03:49.973258] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.312 [2024-06-10 12:03:49.973683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.312 [2024-06-10 12:03:49.974069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.312 [2024-06-10 12:03:49.974078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x147ffc0 with addr=10.0.0.2, port=4420 00:25:56.313 [2024-06-10 12:03:49.974085] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147ffc0 is same with the state(5) to be set 00:25:56.313 [2024-06-10 12:03:49.974338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.313 [2024-06-10 12:03:49.974737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.313 [2024-06-10 12:03:49.974747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643aa0 with addr=10.0.0.2, port=4420 00:25:56.313 [2024-06-10 12:03:49.974755] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643aa0 is same with the state(5) to be set 00:25:56.313 [2024-06-10 12:03:49.975145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.313 [2024-06-10 12:03:49.975538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.313 [2024-06-10 12:03:49.975548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1642f20 with addr=10.0.0.2, port=4420 00:25:56.313 [2024-06-10 12:03:49.975556] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642f20 is same with the state(5) to be set 00:25:56.313 [2024-06-10 12:03:49.975949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.313 [2024-06-10 12:03:49.976351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.313 [2024-06-10 12:03:49.976362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1556210 with addr=10.0.0.2, port=4420 00:25:56.313 [2024-06-10 12:03:49.976369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1556210 is same with the state(5) to be set 00:25:56.313 [2024-06-10 12:03:49.976378] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147d260 (9): Bad file descriptor 00:25:56.313 [2024-06-10 12:03:49.976387] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1556640 (9): Bad file descriptor 00:25:56.313 [2024-06-10 12:03:49.976395] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:56.313 [2024-06-10 12:03:49.976402] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:56.313 [2024-06-10 12:03:49.976409] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:56.313 [2024-06-10 12:03:49.976419] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:56.313 [2024-06-10 12:03:49.976425] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:56.313 [2024-06-10 12:03:49.976432] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:25:56.313 [2024-06-10 12:03:49.977017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 
12:03:49.977193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977366] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.313 [2024-06-10 12:03:49.977524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.313 [2024-06-10 12:03:49.977533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.977540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.977550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.977556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.977565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.977573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.977582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.977589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.977598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.977605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.977614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.977621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.977631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.977638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.977648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.977655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.977664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.977671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.977680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.977688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.977697] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.977706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.977715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.977723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.977732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.977740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.977749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.977756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.977765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.977772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.977781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.977788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.977797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.977805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.977814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.977821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.977830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.977837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.977846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.977853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.977862] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.977869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.977878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.977886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.977895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.977902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.977913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.977920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.977930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.977937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.977946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.977953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.977962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.977970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.977980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.977987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.977997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.978004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.978013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.978020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.978030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.978037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.978046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.978053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.978063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.978070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.978079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.314 [2024-06-10 12:03:49.978086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.314 [2024-06-10 12:03:49.978094] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15de650 is same with the state(5) to be set 00:25:56.314 [2024-06-10 12:03:49.980005] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.314 [2024-06-10 12:03:49.980025] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
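The wall of ABORTED - SQ DELETION (00/08) notices above is what the shutdown test produces when the target side goes away while bdevperf still has I/O queued: bdev_nvme resets each controller, the I/O submission queues are torn down, and every command still outstanding on qid:1 completes with that generic status, so nvme_qpair.c prints the original READ/WRITE plus the abort completion for each one. A quick way to tally the aborted in-flight commands by opcode from a saved copy of this console output (the log file name below is only an example, not a file produced by this run):

  # Count aborted in-flight commands per opcode from a saved console log (file name assumed).
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' nvmf-tcp-phy-autotest-console.log \
    | awk '{print $NF}' | sort | uniq -c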
00:25:56.314 task offset: 29184 on job bdev=Nvme10n1 fails
00:25:56.314
00:25:56.314 Latency(us)
00:25:56.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:56.314 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:56.314 Job: Nvme1n1 ended in about 0.66 seconds with error
00:25:56.314 Verification LBA range: start 0x0 length 0x400
00:25:56.314 Nvme1n1 : 0.66 320.29 20.02 96.69 0.00 152284.53 17694.72 145053.01
00:25:56.314 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:56.314 Job: Nvme2n1 ended in about 0.67 seconds with error
00:25:56.314 Verification LBA range: start 0x0 length 0x400
00:25:56.314 Nvme2n1 : 0.67 376.28 23.52 95.94 0.00 132875.11 47404.37 129324.37
00:25:56.314 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:56.314 Job: Nvme3n1 ended in about 0.66 seconds with error
00:25:56.314 Verification LBA range: start 0x0 length 0x400
00:25:56.314 Nvme3n1 : 0.66 377.85 23.62 96.35 0.00 130719.70 62914.56 126702.93
00:25:56.314 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:56.314 Job: Nvme4n1 ended in about 0.66 seconds with error
00:25:56.314 Verification LBA range: start 0x0 length 0x400
00:25:56.314 Nvme4n1 : 0.66 381.97 23.87 97.39 0.00 127657.71 27743.57 127576.75
00:25:56.315 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:56.315 Job: Nvme5n1 ended in about 0.66 seconds with error
00:25:56.315 Verification LBA range: start 0x0 length 0x400
00:25:56.315 Nvme5n1 : 0.66 315.97 19.75 97.22 0.00 146367.75 67283.63 134567.25
00:25:56.315 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:56.315 Job: Nvme6n1 ended in about 0.67 seconds with error
00:25:56.315 Verification LBA range: start 0x0 length 0x400
00:25:56.315 Nvme6n1 : 0.67 309.15 19.32 95.12 0.00 147936.28 84759.89 143305.39
00:25:56.315 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:56.315 Job: Nvme7n1 ended in about 0.66 seconds with error
00:25:56.315 Verification LBA range: start 0x0 length 0x400
00:25:56.315 Nvme7n1 : 0.66 380.60 23.79 97.04 0.00 123392.50 49370.45 106605.23
00:25:56.315 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:56.315 Job: Nvme8n1 ended in about 0.68 seconds with error
00:25:56.315 Verification LBA range: start 0x0 length 0x400
00:25:56.315 Nvme8n1 : 0.68 308.08 19.25 94.79 0.00 144755.85 81264.64 137188.69
00:25:56.315 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:56.315 Job: Nvme9n1 ended in about 0.68 seconds with error
00:25:56.315 Verification LBA range: start 0x0 length 0x400
00:25:56.315 Nvme9n1 : 0.68 304.42 19.03 93.67 0.00 144834.56 82138.45 114469.55
00:25:56.315 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:56.315 Job: Nvme10n1 ended in about 0.65 seconds with error
00:25:56.315 Verification LBA range: start 0x0 length 0x400
00:25:56.315 Nvme10n1 : 0.65 324.01 20.25 97.82 0.00 134226.32 10376.53 115343.36
00:25:56.315 ===================================================================================================================
00:25:56.315 Total : 3398.63 212.41 962.04 0.00 137934.30 10376.53 145053.01
00:25:56.315 [2024-06-10 12:03:50.010045] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:56.315 [2024-06-10 12:03:50.010089] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9]
resetting controller 00:25:56.315 [2024-06-10 12:03:50.010124] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147ffc0 (9): Bad file descriptor 00:25:56.315 [2024-06-10 12:03:50.010137] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643aa0 (9): Bad file descriptor 00:25:56.315 [2024-06-10 12:03:50.010147] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1642f20 (9): Bad file descriptor 00:25:56.315 [2024-06-10 12:03:50.010157] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1556210 (9): Bad file descriptor 00:25:56.315 [2024-06-10 12:03:50.010170] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.315 [2024-06-10 12:03:50.010176] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.315 [2024-06-10 12:03:50.010184] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.315 [2024-06-10 12:03:50.010198] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:56.315 [2024-06-10 12:03:50.010205] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:56.315 [2024-06-10 12:03:50.010212] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:56.315 [2024-06-10 12:03:50.010230] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:56.315 [2024-06-10 12:03:50.010251] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:56.315 [2024-06-10 12:03:50.010350] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.315 [2024-06-10 12:03:50.010360] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.315 [2024-06-10 12:03:50.010686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-06-10 12:03:50.011131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-06-10 12:03:50.011142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1477790 with addr=10.0.0.2, port=4420 00:25:56.315 [2024-06-10 12:03:50.011151] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477790 is same with the state(5) to be set 00:25:56.315 [2024-06-10 12:03:50.011159] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:56.315 [2024-06-10 12:03:50.011165] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:56.315 [2024-06-10 12:03:50.011172] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:25:56.315 [2024-06-10 12:03:50.011182] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:56.315 [2024-06-10 12:03:50.011189] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:56.315 [2024-06-10 12:03:50.011195] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:56.315 [2024-06-10 12:03:50.011207] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:56.315 [2024-06-10 12:03:50.011213] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:56.315 [2024-06-10 12:03:50.011220] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:25:56.315 [2024-06-10 12:03:50.011230] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:25:56.315 [2024-06-10 12:03:50.011236] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:25:56.315 [2024-06-10 12:03:50.011248] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:25:56.315 [2024-06-10 12:03:50.011277] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:56.315 [2024-06-10 12:03:50.011288] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:56.315 [2024-06-10 12:03:50.011298] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:56.315 [2024-06-10 12:03:50.011308] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:56.315 [2024-06-10 12:03:50.011792] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.315 [2024-06-10 12:03:50.011810] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.315 [2024-06-10 12:03:50.011842] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.315 [2024-06-10 12:03:50.011866] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.315 [2024-06-10 12:03:50.011922] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1477790 (9): Bad file descriptor 00:25:56.315 [2024-06-10 12:03:50.012021] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:56.315 [2024-06-10 12:03:50.012036] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:56.315 [2024-06-10 12:03:50.012056] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:25:56.315 [2024-06-10 12:03:50.012100] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:25:56.315 [2024-06-10 12:03:50.012110] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:25:56.315 [2024-06-10 12:03:50.012119] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:25:56.315 [2024-06-10 12:03:50.012153] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:56.315 [2024-06-10 12:03:50.012166] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.315 [2024-06-10 12:03:50.012185] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.315 [2024-06-10 12:03:50.012569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-06-10 12:03:50.012890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-06-10 12:03:50.012900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x163b1d0 with addr=10.0.0.2, port=4420 00:25:56.315 [2024-06-10 12:03:50.012907] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163b1d0 is same with the state(5) to be set 00:25:56.315 [2024-06-10 12:03:50.013258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-06-10 12:03:50.013603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-06-10 12:03:50.013613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0a50 with addr=10.0.0.2, port=4420 00:25:56.315 [2024-06-10 12:03:50.013620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0a50 is same with the state(5) to be set 00:25:56.315 [2024-06-10 12:03:50.013829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-06-10 12:03:50.014135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-06-10 12:03:50.014144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0170 with addr=10.0.0.2, port=4420 00:25:56.315 [2024-06-10 12:03:50.014151] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0170 is same with the state(5) to be set 00:25:56.315 [2024-06-10 12:03:50.014328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-06-10 12:03:50.014663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-06-10 12:03:50.014673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1556640 with addr=10.0.0.2, port=4420 00:25:56.315 [2024-06-10 12:03:50.014680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1556640 is same with the state(5) to be set 00:25:56.315 [2024-06-10 12:03:50.014927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-06-10 12:03:50.015004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-06-10 12:03:50.015013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x147d260 with addr=10.0.0.2, port=4420 00:25:56.315 [2024-06-10 12:03:50.015024] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147d260 is same with the state(5) to be set 00:25:56.315 [2024-06-10 12:03:50.015033] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163b1d0 (9): Bad file descriptor 00:25:56.315 [2024-06-10 12:03:50.015043] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0a50 (9): Bad file descriptor 
00:25:56.315 [2024-06-10 12:03:50.015052] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0170 (9): Bad file descriptor 00:25:56.315 [2024-06-10 12:03:50.015079] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1556640 (9): Bad file descriptor 00:25:56.315 [2024-06-10 12:03:50.015089] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147d260 (9): Bad file descriptor 00:25:56.315 [2024-06-10 12:03:50.015097] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:56.315 [2024-06-10 12:03:50.015103] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:56.316 [2024-06-10 12:03:50.015110] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:56.316 [2024-06-10 12:03:50.015120] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:56.316 [2024-06-10 12:03:50.015126] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:56.316 [2024-06-10 12:03:50.015133] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:56.316 [2024-06-10 12:03:50.015142] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:56.316 [2024-06-10 12:03:50.015149] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:56.316 [2024-06-10 12:03:50.015155] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:56.316 [2024-06-10 12:03:50.015192] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.316 [2024-06-10 12:03:50.015200] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.316 [2024-06-10 12:03:50.015206] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.316 [2024-06-10 12:03:50.015212] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:56.316 [2024-06-10 12:03:50.015219] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:56.316 [2024-06-10 12:03:50.015225] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:56.316 [2024-06-10 12:03:50.015234] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.316 [2024-06-10 12:03:50.015241] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.316 [2024-06-10 12:03:50.015253] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.316 [2024-06-10 12:03:50.015279] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.316 [2024-06-10 12:03:50.015286] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
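The "connect() failed, errno = 111" entries above are ECONNREFUSED: with the target process gone, nothing is listening on 10.0.0.2:4420 any more, so every reconnect attempt from bdev_nvme is refused and each attempt ends in "Resetting controller failed.". Not part of the captured run, but a one-line check of the listener state (assuming the cvl_0_0_ns_spdk namespace set up by nvmf/common.sh is still present at that point) would be:

  # Hypothetical check: is anything still listening on the NVMe/TCP port inside the target namespace?
  # After nvmf_tgt has gone away this prints nothing, matching the ECONNREFUSED errors above.
  ip netns exec cvl_0_0_ns_spdk ss -ltn | grep 4420 || echo "no listener on 4420 (ECONNREFUSED expected)"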
00:25:56.577 12:03:50 -- target/shutdown.sh@135 -- # nvmfpid= 00:25:56.577 12:03:50 -- target/shutdown.sh@138 -- # sleep 1 00:25:57.519 12:03:51 -- target/shutdown.sh@141 -- # kill -9 2060293 00:25:57.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (2060293) - No such process 00:25:57.519 12:03:51 -- target/shutdown.sh@141 -- # true 00:25:57.519 12:03:51 -- target/shutdown.sh@143 -- # stoptarget 00:25:57.519 12:03:51 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:57.519 12:03:51 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:57.519 12:03:51 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:57.519 12:03:51 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:57.519 12:03:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:57.519 12:03:51 -- nvmf/common.sh@116 -- # sync 00:25:57.519 12:03:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:57.519 12:03:51 -- nvmf/common.sh@119 -- # set +e 00:25:57.519 12:03:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:57.519 12:03:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:57.520 rmmod nvme_tcp 00:25:57.520 rmmod nvme_fabrics 00:25:57.520 rmmod nvme_keyring 00:25:57.520 12:03:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:57.781 12:03:51 -- nvmf/common.sh@123 -- # set -e 00:25:57.781 12:03:51 -- nvmf/common.sh@124 -- # return 0 00:25:57.781 12:03:51 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:25:57.781 12:03:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:57.781 12:03:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:57.781 12:03:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:57.781 12:03:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:57.781 12:03:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:57.781 12:03:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.781 12:03:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:57.781 12:03:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.694 12:03:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:59.695 00:25:59.695 real 0m7.440s 00:25:59.695 user 0m17.254s 00:25:59.695 sys 0m1.164s 00:25:59.695 12:03:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:59.695 12:03:53 -- common/autotest_common.sh@10 -- # set +x 00:25:59.695 ************************************ 00:25:59.695 END TEST nvmf_shutdown_tc3 00:25:59.695 ************************************ 00:25:59.695 12:03:53 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:25:59.695 00:25:59.695 real 0m31.781s 00:25:59.695 user 1m13.714s 00:25:59.695 sys 0m9.103s 00:25:59.695 12:03:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:59.695 12:03:53 -- common/autotest_common.sh@10 -- # set +x 00:25:59.695 ************************************ 00:25:59.695 END TEST nvmf_shutdown 00:25:59.695 ************************************ 00:25:59.695 12:03:53 -- nvmf/nvmf.sh@85 -- # timing_exit target 00:25:59.695 12:03:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:59.695 12:03:53 -- common/autotest_common.sh@10 -- # set +x 00:25:59.956 12:03:53 -- nvmf/nvmf.sh@87 -- # timing_enter host 00:25:59.956 12:03:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:59.956 12:03:53 -- common/autotest_common.sh@10 -- # set +x 00:25:59.956 
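The stoptarget/nvmftestfini trace above is the whole teardown for the shutdown suite. A condensed sketch of the same steps, not a verbatim excerpt ($rootdir stands in for the workspace path, and the namespace removal is what _remove_spdk_ns is presumed to do):

  # Sketch of the teardown performed by stoptarget + nvmftestfini (paths and netns cleanup assumed).
  rm -f ./local-job0-0-verify.state                          # bdevperf job state file
  rm -rf "$rootdir/test/nvmf/target/bdevperf.conf" \
         "$rootdir/test/nvmf/target/rpcs.txt"                # per-test scratch files
  modprobe -v -r nvme-tcp                                    # also drops nvme_fabrics / nvme_keyring
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true        # presumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                                   # clear the initiator-side test address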
12:03:53 -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:25:59.956 12:03:53 -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:59.956 12:03:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:59.956 12:03:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:59.956 12:03:53 -- common/autotest_common.sh@10 -- # set +x 00:25:59.956 ************************************ 00:25:59.956 START TEST nvmf_multicontroller 00:25:59.956 ************************************ 00:25:59.956 12:03:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:59.956 * Looking for test storage... 00:25:59.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:59.956 12:03:53 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:59.957 12:03:53 -- nvmf/common.sh@7 -- # uname -s 00:25:59.957 12:03:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:59.957 12:03:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:59.957 12:03:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:59.957 12:03:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:59.957 12:03:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:59.957 12:03:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:59.957 12:03:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:59.957 12:03:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:59.957 12:03:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:59.957 12:03:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:59.957 12:03:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:59.957 12:03:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:59.957 12:03:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:59.957 12:03:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:59.957 12:03:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:59.957 12:03:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:59.957 12:03:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:59.957 12:03:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.957 12:03:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:59.957 12:03:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.957 12:03:53 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.957 12:03:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.957 12:03:53 -- paths/export.sh@5 -- # export PATH 00:25:59.957 12:03:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.957 12:03:53 -- nvmf/common.sh@46 -- # : 0 00:25:59.957 12:03:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:59.957 12:03:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:59.957 12:03:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:59.957 12:03:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:59.957 12:03:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:59.957 12:03:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:59.957 12:03:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:59.957 12:03:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:59.957 12:03:53 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:59.957 12:03:53 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:59.957 12:03:53 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:59.957 12:03:53 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:59.957 12:03:53 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:59.957 12:03:53 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:59.957 12:03:53 -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:59.957 12:03:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:59.957 12:03:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:59.957 12:03:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:59.957 12:03:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:59.957 12:03:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:59.957 12:03:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.957 12:03:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:59.957 12:03:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:25:59.957 12:03:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:59.957 12:03:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:59.957 12:03:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:59.957 12:03:53 -- common/autotest_common.sh@10 -- # set +x 00:26:08.098 12:04:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:08.098 12:04:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:08.098 12:04:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:08.098 12:04:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:08.098 12:04:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:08.098 12:04:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:08.098 12:04:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:08.099 12:04:00 -- nvmf/common.sh@294 -- # net_devs=() 00:26:08.099 12:04:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:08.099 12:04:00 -- nvmf/common.sh@295 -- # e810=() 00:26:08.099 12:04:00 -- nvmf/common.sh@295 -- # local -ga e810 00:26:08.099 12:04:00 -- nvmf/common.sh@296 -- # x722=() 00:26:08.099 12:04:00 -- nvmf/common.sh@296 -- # local -ga x722 00:26:08.099 12:04:00 -- nvmf/common.sh@297 -- # mlx=() 00:26:08.099 12:04:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:08.099 12:04:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:08.099 12:04:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:08.099 12:04:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:08.099 12:04:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:08.099 12:04:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:08.099 12:04:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:08.099 12:04:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:08.099 12:04:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:08.099 12:04:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:08.099 12:04:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:08.099 12:04:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:08.099 12:04:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:08.099 12:04:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:08.099 12:04:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:08.099 12:04:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:08.099 12:04:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:08.099 12:04:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:08.099 12:04:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:08.099 12:04:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:08.099 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:08.099 12:04:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:08.099 12:04:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:08.099 12:04:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:08.099 12:04:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:08.099 12:04:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:08.099 12:04:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:08.099 12:04:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:08.099 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:08.099 12:04:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 
00:26:08.099 12:04:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:08.099 12:04:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:08.099 12:04:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:08.099 12:04:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:08.099 12:04:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:08.099 12:04:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:08.099 12:04:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:08.099 12:04:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:08.099 12:04:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:08.099 12:04:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:08.099 12:04:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:08.099 12:04:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:08.099 Found net devices under 0000:31:00.0: cvl_0_0 00:26:08.099 12:04:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:08.099 12:04:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:08.099 12:04:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:08.099 12:04:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:08.099 12:04:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:08.099 12:04:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:08.099 Found net devices under 0000:31:00.1: cvl_0_1 00:26:08.099 12:04:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:08.099 12:04:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:08.099 12:04:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:08.099 12:04:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:08.099 12:04:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:08.099 12:04:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:08.099 12:04:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:08.099 12:04:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:08.099 12:04:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:08.099 12:04:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:08.099 12:04:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:08.099 12:04:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:08.099 12:04:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:08.099 12:04:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:08.099 12:04:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:08.099 12:04:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:08.099 12:04:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:08.099 12:04:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:08.099 12:04:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:08.099 12:04:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:08.099 12:04:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:08.099 12:04:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:08.099 12:04:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:08.099 12:04:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:08.099 12:04:00 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:26:08.099 12:04:00 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:08.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:08.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms 00:26:08.099 00:26:08.099 --- 10.0.0.2 ping statistics --- 00:26:08.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.099 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:26:08.099 12:04:00 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:08.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:08.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:26:08.099 00:26:08.099 --- 10.0.0.1 ping statistics --- 00:26:08.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.099 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:26:08.099 12:04:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:08.099 12:04:00 -- nvmf/common.sh@410 -- # return 0 00:26:08.099 12:04:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:08.099 12:04:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:08.099 12:04:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:08.099 12:04:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:08.099 12:04:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:08.099 12:04:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:08.099 12:04:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:08.099 12:04:00 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:26:08.099 12:04:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:08.099 12:04:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:08.099 12:04:00 -- common/autotest_common.sh@10 -- # set +x 00:26:08.099 12:04:00 -- nvmf/common.sh@469 -- # nvmfpid=2065387 00:26:08.099 12:04:00 -- nvmf/common.sh@470 -- # waitforlisten 2065387 00:26:08.099 12:04:00 -- common/autotest_common.sh@819 -- # '[' -z 2065387 ']' 00:26:08.099 12:04:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.099 12:04:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:08.099 12:04:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:08.099 12:04:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:08.099 12:04:00 -- common/autotest_common.sh@10 -- # set +x 00:26:08.099 12:04:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:08.099 [2024-06-10 12:04:00.983377] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:08.099 [2024-06-10 12:04:00.983470] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:08.099 EAL: No free 2048 kB hugepages reported on node 1 00:26:08.100 [2024-06-10 12:04:01.074239] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:08.100 [2024-06-10 12:04:01.161413] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:08.100 [2024-06-10 12:04:01.161574] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
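The nvmf_tcp_init steps traced just before the ping output build a loopback topology out of the two CVL ports that were discovered: the first port moves into a network namespace and becomes the target side, the second stays in the host namespace as the initiator, and the two pings confirm 10.0.0.1 <-> 10.0.0.2 reachability before the target is started. A condensed sketch of the same commands (mirroring the trace, not re-run here):

  # Loopback NVMe/TCP topology as set up by nvmf_tcp_init in test/nvmf/common.sh.
  ip netns add cvl_0_0_ns_spdk                                         # namespace that will host nvmf_tgt
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move the target-side port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (host namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP into the host side
  ping -c 1 10.0.0.2                                                   # initiator -> target sanity check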
00:26:08.100 [2024-06-10 12:04:01.161584] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:08.100 [2024-06-10 12:04:01.161592] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:08.100 [2024-06-10 12:04:01.161774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:08.100 [2024-06-10 12:04:01.161940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.100 [2024-06-10 12:04:01.161941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:08.100 12:04:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:08.100 12:04:01 -- common/autotest_common.sh@852 -- # return 0 00:26:08.100 12:04:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:08.100 12:04:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:08.100 12:04:01 -- common/autotest_common.sh@10 -- # set +x 00:26:08.100 12:04:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:08.100 12:04:01 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:08.100 12:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:08.100 12:04:01 -- common/autotest_common.sh@10 -- # set +x 00:26:08.100 [2024-06-10 12:04:01.778480] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:08.100 12:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:08.100 12:04:01 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:08.100 12:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:08.100 12:04:01 -- common/autotest_common.sh@10 -- # set +x 00:26:08.100 Malloc0 00:26:08.100 12:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:08.100 12:04:01 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:08.100 12:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:08.100 12:04:01 -- common/autotest_common.sh@10 -- # set +x 00:26:08.100 12:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:08.100 12:04:01 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:08.100 12:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:08.100 12:04:01 -- common/autotest_common.sh@10 -- # set +x 00:26:08.100 12:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:08.100 12:04:01 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:08.100 12:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:08.100 12:04:01 -- common/autotest_common.sh@10 -- # set +x 00:26:08.100 [2024-06-10 12:04:01.854182] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:08.100 12:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:08.100 12:04:01 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:08.100 12:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:08.100 12:04:01 -- common/autotest_common.sh@10 -- # set +x 00:26:08.100 [2024-06-10 12:04:01.866167] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:08.360 12:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
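The rpc_cmd sequence above is the target-side configuration for the multicontroller test: a TCP transport, a malloc bdev, subsystem cnode1 with that bdev as its namespace, and listeners on ports 4420 and 4421. Issued by hand it would look roughly like the sketch below (the rpc.py path and the default /var/tmp/spdk.sock RPC socket are assumptions, not shown in the trace); cnode2 is built the same way right after, backed by Malloc1:

  # Sketch of the target-side RPC sequence for subsystem cnode1 (rpc.py path and socket assumed).
  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421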
00:26:08.360 12:04:01 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:08.361 12:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:08.361 12:04:01 -- common/autotest_common.sh@10 -- # set +x 00:26:08.361 Malloc1 00:26:08.361 12:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:08.361 12:04:01 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:08.361 12:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:08.361 12:04:01 -- common/autotest_common.sh@10 -- # set +x 00:26:08.361 12:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:08.361 12:04:01 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:26:08.361 12:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:08.361 12:04:01 -- common/autotest_common.sh@10 -- # set +x 00:26:08.361 12:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:08.361 12:04:01 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:08.361 12:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:08.361 12:04:01 -- common/autotest_common.sh@10 -- # set +x 00:26:08.361 12:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:08.361 12:04:01 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:26:08.361 12:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:08.361 12:04:01 -- common/autotest_common.sh@10 -- # set +x 00:26:08.361 12:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:08.361 12:04:01 -- host/multicontroller.sh@44 -- # bdevperf_pid=2065456 00:26:08.361 12:04:01 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:08.361 12:04:01 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:26:08.361 12:04:01 -- host/multicontroller.sh@47 -- # waitforlisten 2065456 /var/tmp/bdevperf.sock 00:26:08.361 12:04:01 -- common/autotest_common.sh@819 -- # '[' -z 2065456 ']' 00:26:08.361 12:04:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:08.361 12:04:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:08.361 12:04:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:08.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
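bdevperf is launched with -z, so it sits idle and waits to be driven over its own RPC socket (/var/tmp/bdevperf.sock). The controller attach that follows, and the negative test right after it, can be reproduced by hand roughly as below (rpc.py path assumed); re-issuing the attach with a different hostnqn but the same -b name is expected to be rejected with JSON-RPC error -114, which is exactly the response shown next in the trace:

  # Sketch of the controller attach driven over bdevperf's RPC socket (rpc.py path assumed).
  rpc=./scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # A second attach reusing the name NVMe0 (the test adds -q nqn.2021-09-7.io.spdk:00001) fails with:
  #   "A controller named NVMe0 already exists with the specified network path"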
00:26:08.361 12:04:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:08.361 12:04:01 -- common/autotest_common.sh@10 -- # set +x 00:26:09.301 12:04:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:09.301 12:04:02 -- common/autotest_common.sh@852 -- # return 0 00:26:09.301 12:04:02 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:09.301 12:04:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.301 12:04:02 -- common/autotest_common.sh@10 -- # set +x 00:26:09.301 NVMe0n1 00:26:09.301 12:04:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.301 12:04:02 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:09.301 12:04:02 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:26:09.301 12:04:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.301 12:04:02 -- common/autotest_common.sh@10 -- # set +x 00:26:09.301 12:04:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.301 1 00:26:09.301 12:04:02 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:09.301 12:04:02 -- common/autotest_common.sh@640 -- # local es=0 00:26:09.301 12:04:02 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:09.301 12:04:02 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:09.301 12:04:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:09.301 12:04:02 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:09.301 12:04:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:09.301 12:04:02 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:09.301 12:04:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.301 12:04:02 -- common/autotest_common.sh@10 -- # set +x 00:26:09.301 request: 00:26:09.301 { 00:26:09.301 "name": "NVMe0", 00:26:09.301 "trtype": "tcp", 00:26:09.301 "traddr": "10.0.0.2", 00:26:09.301 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:26:09.301 "hostaddr": "10.0.0.2", 00:26:09.301 "hostsvcid": "60000", 00:26:09.301 "adrfam": "ipv4", 00:26:09.301 "trsvcid": "4420", 00:26:09.301 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:09.301 "method": "bdev_nvme_attach_controller", 00:26:09.301 "req_id": 1 00:26:09.301 } 00:26:09.301 Got JSON-RPC error response 00:26:09.301 response: 00:26:09.301 { 00:26:09.301 "code": -114, 00:26:09.301 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:09.301 } 00:26:09.301 12:04:02 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:09.301 12:04:02 -- common/autotest_common.sh@643 -- # es=1 00:26:09.301 12:04:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:09.301 12:04:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:09.301 12:04:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:09.301 12:04:02 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:09.301 12:04:02 -- common/autotest_common.sh@640 -- # local es=0 00:26:09.301 12:04:02 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:09.302 12:04:02 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:09.302 12:04:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:09.302 12:04:02 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:09.302 12:04:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:09.302 12:04:02 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:09.302 12:04:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.302 12:04:02 -- common/autotest_common.sh@10 -- # set +x 00:26:09.302 request: 00:26:09.302 { 00:26:09.302 "name": "NVMe0", 00:26:09.302 "trtype": "tcp", 00:26:09.302 "traddr": "10.0.0.2", 00:26:09.302 "hostaddr": "10.0.0.2", 00:26:09.302 "hostsvcid": "60000", 00:26:09.302 "adrfam": "ipv4", 00:26:09.302 "trsvcid": "4420", 00:26:09.302 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:09.302 "method": "bdev_nvme_attach_controller", 00:26:09.302 "req_id": 1 00:26:09.302 } 00:26:09.302 Got JSON-RPC error response 00:26:09.302 response: 00:26:09.302 { 00:26:09.302 "code": -114, 00:26:09.302 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:09.302 } 00:26:09.302 12:04:02 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:09.302 12:04:02 -- common/autotest_common.sh@643 -- # es=1 00:26:09.302 12:04:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:09.302 12:04:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:09.302 12:04:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:09.302 12:04:02 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:09.302 12:04:02 -- common/autotest_common.sh@640 -- # local es=0 00:26:09.302 12:04:02 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:09.302 12:04:02 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:09.302 12:04:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:09.302 12:04:02 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:09.302 12:04:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:09.302 12:04:02 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:09.302 12:04:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.302 12:04:02 -- common/autotest_common.sh@10 -- # set +x 00:26:09.302 request: 00:26:09.302 { 00:26:09.302 "name": "NVMe0", 00:26:09.302 "trtype": "tcp", 00:26:09.302 "traddr": "10.0.0.2", 00:26:09.302 "hostaddr": 
"10.0.0.2", 00:26:09.302 "hostsvcid": "60000", 00:26:09.302 "adrfam": "ipv4", 00:26:09.302 "trsvcid": "4420", 00:26:09.302 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:09.302 "multipath": "disable", 00:26:09.302 "method": "bdev_nvme_attach_controller", 00:26:09.302 "req_id": 1 00:26:09.302 } 00:26:09.302 Got JSON-RPC error response 00:26:09.302 response: 00:26:09.302 { 00:26:09.302 "code": -114, 00:26:09.302 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:26:09.302 } 00:26:09.302 12:04:02 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:09.302 12:04:02 -- common/autotest_common.sh@643 -- # es=1 00:26:09.302 12:04:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:09.302 12:04:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:09.302 12:04:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:09.302 12:04:02 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:09.302 12:04:02 -- common/autotest_common.sh@640 -- # local es=0 00:26:09.302 12:04:02 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:09.302 12:04:02 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:09.302 12:04:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:09.302 12:04:02 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:09.302 12:04:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:09.302 12:04:02 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:09.302 12:04:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.302 12:04:02 -- common/autotest_common.sh@10 -- # set +x 00:26:09.302 request: 00:26:09.302 { 00:26:09.302 "name": "NVMe0", 00:26:09.302 "trtype": "tcp", 00:26:09.302 "traddr": "10.0.0.2", 00:26:09.302 "hostaddr": "10.0.0.2", 00:26:09.302 "hostsvcid": "60000", 00:26:09.302 "adrfam": "ipv4", 00:26:09.302 "trsvcid": "4420", 00:26:09.302 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:09.302 "multipath": "failover", 00:26:09.302 "method": "bdev_nvme_attach_controller", 00:26:09.302 "req_id": 1 00:26:09.302 } 00:26:09.302 Got JSON-RPC error response 00:26:09.302 response: 00:26:09.302 { 00:26:09.302 "code": -114, 00:26:09.302 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:09.302 } 00:26:09.302 12:04:02 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:09.302 12:04:02 -- common/autotest_common.sh@643 -- # es=1 00:26:09.302 12:04:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:09.302 12:04:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:09.302 12:04:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:09.302 12:04:02 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:09.302 12:04:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.302 12:04:02 -- common/autotest_common.sh@10 -- # set +x 00:26:09.562 00:26:09.562 12:04:03 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:26:09.562 12:04:03 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:09.562 12:04:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.562 12:04:03 -- common/autotest_common.sh@10 -- # set +x 00:26:09.562 12:04:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.562 12:04:03 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:09.562 12:04:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.562 12:04:03 -- common/autotest_common.sh@10 -- # set +x 00:26:09.823 00:26:09.823 12:04:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.823 12:04:03 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:09.823 12:04:03 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:09.823 12:04:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.823 12:04:03 -- common/autotest_common.sh@10 -- # set +x 00:26:09.823 12:04:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.823 12:04:03 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:09.823 12:04:03 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:10.764 0 00:26:10.764 12:04:04 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:26:10.764 12:04:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:10.764 12:04:04 -- common/autotest_common.sh@10 -- # set +x 00:26:10.764 12:04:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:10.764 12:04:04 -- host/multicontroller.sh@100 -- # killprocess 2065456 00:26:10.764 12:04:04 -- common/autotest_common.sh@926 -- # '[' -z 2065456 ']' 00:26:10.764 12:04:04 -- common/autotest_common.sh@930 -- # kill -0 2065456 00:26:10.764 12:04:04 -- common/autotest_common.sh@931 -- # uname 00:26:10.764 12:04:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:10.764 12:04:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2065456 00:26:11.025 12:04:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:11.025 12:04:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:11.025 12:04:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2065456' 00:26:11.025 killing process with pid 2065456 00:26:11.025 12:04:04 -- common/autotest_common.sh@945 -- # kill 2065456 00:26:11.025 12:04:04 -- common/autotest_common.sh@950 -- # wait 2065456 00:26:11.025 12:04:04 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:11.025 12:04:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:11.025 12:04:04 -- common/autotest_common.sh@10 -- # set +x 00:26:11.025 12:04:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:11.025 12:04:04 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:11.025 12:04:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:11.025 12:04:04 -- common/autotest_common.sh@10 -- # set +x 00:26:11.025 12:04:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:11.025 12:04:04 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
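The attach/detach sequence above is the core of the multicontroller check: re-attaching under the existing bdev name NVMe0 with a different hostnqn, a different target subsystem, multipath disabled, or a failover request that does not match the existing path is rejected with JSON-RPC error -114, while adding the second listener (port 4421) as an extra path and attaching a separately named NVMe1 controller both succeed. A condensed sketch against bdevperf's RPC socket, assuming rpc_cmd -s wraps scripts/rpc.py -s:

RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"
OPTS="-t tcp -a 10.0.0.2 -f ipv4 -i 10.0.0.2 -c 60000"
$RPC bdev_nvme_attach_controller -b NVMe0 $OPTS -s 4420 -n nqn.2016-06.io.spdk:cnode1   # creates NVMe0n1
# Each of the following conflicting re-attaches is expected to fail with error -114:
$RPC bdev_nvme_attach_controller -b NVMe0 $OPTS -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2021-09-7.io.spdk:00001
$RPC bdev_nvme_attach_controller -b NVMe0 $OPTS -s 4420 -n nqn.2016-06.io.spdk:cnode2
$RPC bdev_nvme_attach_controller -b NVMe0 $OPTS -s 4420 -n nqn.2016-06.io.spdk:cnode1 -x disable
$RPC bdev_nvme_attach_controller -b NVMe0 $OPTS -s 4420 -n nqn.2016-06.io.spdk:cnode1 -x failover
# A second path to the same subsystem is accepted, detached again, and NVMe1 is attached on 4421:
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode1
$RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode1
$RPC bdev_nvme_attach_controller -b NVMe1 $OPTS -s 4421 -n nqn.2016-06.io.spdk:cnode1
$RPC bdev_nvme_get_controllers | grep -c NVMe        # the test expects 2 controllers here
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests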
00:26:11.025 12:04:04 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:11.025 12:04:04 -- common/autotest_common.sh@1597 -- # read -r file 00:26:11.025 12:04:04 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:26:11.025 12:04:04 -- common/autotest_common.sh@1596 -- # sort -u 00:26:11.025 12:04:04 -- common/autotest_common.sh@1598 -- # cat 00:26:11.025 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:11.025 [2024-06-10 12:04:01.978992] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:11.025 [2024-06-10 12:04:01.979042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2065456 ] 00:26:11.025 EAL: No free 2048 kB hugepages reported on node 1 00:26:11.025 [2024-06-10 12:04:02.037430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.025 [2024-06-10 12:04:02.099779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.025 [2024-06-10 12:04:03.355792] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 20ed40f5-3b0d-42c5-b53e-8a131e2e3781 already exists 00:26:11.025 [2024-06-10 12:04:03.355821] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:20ed40f5-3b0d-42c5-b53e-8a131e2e3781 alias for bdev NVMe1n1 00:26:11.025 [2024-06-10 12:04:03.355831] bdev_nvme.c:4230:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:26:11.025 Running I/O for 1 seconds... 00:26:11.025 00:26:11.025 Latency(us) 00:26:11.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.025 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:26:11.025 NVMe0n1 : 1.00 30130.67 117.70 0.00 0.00 4237.46 2293.76 14636.37 00:26:11.025 =================================================================================================================== 00:26:11.025 Total : 30130.67 117.70 0.00 0.00 4237.46 2293.76 14636.37 00:26:11.025 Received shutdown signal, test time was about 1.000000 seconds 00:26:11.025 00:26:11.025 Latency(us) 00:26:11.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.025 =================================================================================================================== 00:26:11.025 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:11.025 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:11.025 12:04:04 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:11.025 12:04:04 -- common/autotest_common.sh@1597 -- # read -r file 00:26:11.025 12:04:04 -- host/multicontroller.sh@108 -- # nvmftestfini 00:26:11.025 12:04:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:11.025 12:04:04 -- nvmf/common.sh@116 -- # sync 00:26:11.025 12:04:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:11.025 12:04:04 -- nvmf/common.sh@119 -- # set +e 00:26:11.025 12:04:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:11.025 12:04:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:11.025 rmmod nvme_tcp 00:26:11.025 rmmod nvme_fabrics 00:26:11.025 rmmod nvme_keyring 00:26:11.286 12:04:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:11.286 12:04:04 -- nvmf/common.sh@123 -- # 
set -e 00:26:11.286 12:04:04 -- nvmf/common.sh@124 -- # return 0 00:26:11.286 12:04:04 -- nvmf/common.sh@477 -- # '[' -n 2065387 ']' 00:26:11.286 12:04:04 -- nvmf/common.sh@478 -- # killprocess 2065387 00:26:11.286 12:04:04 -- common/autotest_common.sh@926 -- # '[' -z 2065387 ']' 00:26:11.286 12:04:04 -- common/autotest_common.sh@930 -- # kill -0 2065387 00:26:11.286 12:04:04 -- common/autotest_common.sh@931 -- # uname 00:26:11.286 12:04:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:11.286 12:04:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2065387 00:26:11.286 12:04:04 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:11.286 12:04:04 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:11.286 12:04:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2065387' 00:26:11.286 killing process with pid 2065387 00:26:11.286 12:04:04 -- common/autotest_common.sh@945 -- # kill 2065387 00:26:11.286 12:04:04 -- common/autotest_common.sh@950 -- # wait 2065387 00:26:11.286 12:04:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:11.286 12:04:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:11.286 12:04:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:11.286 12:04:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:11.286 12:04:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:11.286 12:04:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.286 12:04:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:11.286 12:04:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.828 12:04:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:13.828 00:26:13.828 real 0m13.592s 00:26:13.828 user 0m16.667s 00:26:13.828 sys 0m6.078s 00:26:13.828 12:04:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:13.828 12:04:07 -- common/autotest_common.sh@10 -- # set +x 00:26:13.828 ************************************ 00:26:13.828 END TEST nvmf_multicontroller 00:26:13.828 ************************************ 00:26:13.828 12:04:07 -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:13.828 12:04:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:13.828 12:04:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:13.828 12:04:07 -- common/autotest_common.sh@10 -- # set +x 00:26:13.828 ************************************ 00:26:13.828 START TEST nvmf_aer 00:26:13.828 ************************************ 00:26:13.828 12:04:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:13.828 * Looking for test storage... 
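Between sub-tests, nvmftestfini tears the environment back down: the kernel NVMe/TCP initiator modules are unloaded, the nvmf_tgt started for the multicontroller test (pid 2065387 in this run) is killed, and the initiator-side address is flushed before nvmf_aer begins. Roughly, and simplified from the trap handlers above:

sync
modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics and nvme_keyring, as logged above
modprobe -v -r nvme-fabrics
kill 2065387                   # killprocess: stop the nvmf_tgt started for this sub-test
ip -4 addr flush cvl_0_1       # drop the 10.0.0.1/24 test address from the initiator port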
00:26:13.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:13.828 12:04:07 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:13.828 12:04:07 -- nvmf/common.sh@7 -- # uname -s 00:26:13.828 12:04:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:13.828 12:04:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:13.828 12:04:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:13.828 12:04:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:13.828 12:04:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:13.828 12:04:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:13.828 12:04:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:13.828 12:04:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:13.828 12:04:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:13.828 12:04:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:13.828 12:04:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:13.828 12:04:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:13.828 12:04:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:13.828 12:04:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:13.828 12:04:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:13.828 12:04:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:13.828 12:04:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:13.828 12:04:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:13.828 12:04:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:13.828 12:04:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.828 12:04:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.828 12:04:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.828 12:04:07 -- paths/export.sh@5 -- # export PATH 00:26:13.828 12:04:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.828 12:04:07 -- nvmf/common.sh@46 -- # : 0 00:26:13.828 12:04:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:13.828 12:04:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:13.828 12:04:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:13.828 12:04:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:13.828 12:04:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:13.828 12:04:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:13.828 12:04:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:13.828 12:04:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:13.828 12:04:07 -- host/aer.sh@11 -- # nvmftestinit 00:26:13.828 12:04:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:13.828 12:04:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:13.828 12:04:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:13.828 12:04:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:13.828 12:04:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:13.828 12:04:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.828 12:04:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:13.828 12:04:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.828 12:04:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:13.828 12:04:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:13.828 12:04:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:13.828 12:04:07 -- common/autotest_common.sh@10 -- # set +x 00:26:21.973 12:04:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:21.973 12:04:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:21.973 12:04:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:21.973 12:04:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:21.973 12:04:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:21.973 12:04:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:21.973 12:04:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:21.973 12:04:14 -- nvmf/common.sh@294 -- # net_devs=() 00:26:21.973 12:04:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:21.973 12:04:14 -- nvmf/common.sh@295 -- # e810=() 00:26:21.973 12:04:14 -- nvmf/common.sh@295 -- # local -ga e810 00:26:21.973 12:04:14 -- nvmf/common.sh@296 -- # x722=() 00:26:21.973 
12:04:14 -- nvmf/common.sh@296 -- # local -ga x722 00:26:21.973 12:04:14 -- nvmf/common.sh@297 -- # mlx=() 00:26:21.974 12:04:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:21.974 12:04:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:21.974 12:04:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:21.974 12:04:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:21.974 12:04:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:21.974 12:04:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:21.974 12:04:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:21.974 12:04:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:21.974 12:04:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:21.974 12:04:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:21.974 12:04:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:21.974 12:04:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:21.974 12:04:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:21.974 12:04:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:21.974 12:04:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:21.974 12:04:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:21.974 12:04:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:21.974 12:04:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:21.974 12:04:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:21.974 12:04:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:21.974 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:21.974 12:04:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:21.974 12:04:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:21.974 12:04:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:21.974 12:04:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:21.974 12:04:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:21.974 12:04:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:21.974 12:04:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:21.974 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:21.974 12:04:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:21.974 12:04:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:21.974 12:04:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:21.974 12:04:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:21.974 12:04:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:21.974 12:04:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:21.974 12:04:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:21.974 12:04:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:21.974 12:04:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:21.974 12:04:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.974 12:04:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:21.974 12:04:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.974 12:04:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:21.974 Found net devices under 0000:31:00.0: cvl_0_0 00:26:21.974 12:04:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.974 12:04:14 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:21.974 12:04:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.974 12:04:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:21.974 12:04:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.974 12:04:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:21.974 Found net devices under 0000:31:00.1: cvl_0_1 00:26:21.974 12:04:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.974 12:04:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:21.974 12:04:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:21.974 12:04:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:21.974 12:04:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:21.974 12:04:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:21.974 12:04:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:21.974 12:04:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:21.974 12:04:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:21.974 12:04:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:21.974 12:04:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:21.974 12:04:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:21.974 12:04:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:21.974 12:04:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:21.974 12:04:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:21.974 12:04:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:21.974 12:04:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:21.974 12:04:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:21.974 12:04:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:21.974 12:04:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:21.974 12:04:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:21.974 12:04:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:21.974 12:04:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:21.974 12:04:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:21.974 12:04:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:21.974 12:04:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:21.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:21.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:26:21.974 00:26:21.974 --- 10.0.0.2 ping statistics --- 00:26:21.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.974 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:26:21.974 12:04:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:21.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:21.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.407 ms 00:26:21.974 00:26:21.974 --- 10.0.0.1 ping statistics --- 00:26:21.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.974 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:26:21.974 12:04:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:21.974 12:04:14 -- nvmf/common.sh@410 -- # return 0 00:26:21.974 12:04:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:21.974 12:04:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:21.974 12:04:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:21.974 12:04:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:21.974 12:04:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:21.974 12:04:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:21.974 12:04:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:21.974 12:04:14 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:21.974 12:04:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:21.974 12:04:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:21.974 12:04:14 -- common/autotest_common.sh@10 -- # set +x 00:26:21.974 12:04:14 -- nvmf/common.sh@469 -- # nvmfpid=2070291 00:26:21.974 12:04:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:21.974 12:04:14 -- nvmf/common.sh@470 -- # waitforlisten 2070291 00:26:21.974 12:04:14 -- common/autotest_common.sh@819 -- # '[' -z 2070291 ']' 00:26:21.974 12:04:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.974 12:04:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:21.974 12:04:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:21.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:21.974 12:04:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:21.974 12:04:14 -- common/autotest_common.sh@10 -- # set +x 00:26:21.974 [2024-06-10 12:04:14.660988] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:21.974 [2024-06-10 12:04:14.661055] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:21.974 EAL: No free 2048 kB hugepages reported on node 1 00:26:21.974 [2024-06-10 12:04:14.731994] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:21.974 [2024-06-10 12:04:14.805523] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:21.974 [2024-06-10 12:04:14.805647] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:21.974 [2024-06-10 12:04:14.805655] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:21.974 [2024-06-10 12:04:14.805662] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
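nvmftestinit for the aer test rebuilds the same split-namespace topology used throughout this run: one port of the E810 NIC (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2, the peer port (cvl_0_1) stays in the root namespace as 10.0.0.1, connectivity is ping-verified in both directions, and nvmf_tgt is then launched inside the namespace. A rough sketch of the commands involved, with paths abbreviated:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port on the host side
ping -c 1 10.0.0.2                                                  # root namespace -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> root namespace
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &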
00:26:21.974 [2024-06-10 12:04:14.805799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.975 [2024-06-10 12:04:14.805916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:21.975 [2024-06-10 12:04:14.806072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.975 [2024-06-10 12:04:14.806073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:21.975 12:04:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:21.975 12:04:15 -- common/autotest_common.sh@852 -- # return 0 00:26:21.975 12:04:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:21.975 12:04:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:21.975 12:04:15 -- common/autotest_common.sh@10 -- # set +x 00:26:21.975 12:04:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:21.975 12:04:15 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:21.975 12:04:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:21.975 12:04:15 -- common/autotest_common.sh@10 -- # set +x 00:26:21.975 [2024-06-10 12:04:15.475405] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:21.975 12:04:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:21.975 12:04:15 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:21.975 12:04:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:21.975 12:04:15 -- common/autotest_common.sh@10 -- # set +x 00:26:21.975 Malloc0 00:26:21.975 12:04:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:21.975 12:04:15 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:21.975 12:04:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:21.975 12:04:15 -- common/autotest_common.sh@10 -- # set +x 00:26:21.975 12:04:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:21.975 12:04:15 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:21.975 12:04:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:21.975 12:04:15 -- common/autotest_common.sh@10 -- # set +x 00:26:21.975 12:04:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:21.975 12:04:15 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:21.975 12:04:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:21.975 12:04:15 -- common/autotest_common.sh@10 -- # set +x 00:26:21.975 [2024-06-10 12:04:15.534717] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:21.975 12:04:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:21.975 12:04:15 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:21.975 12:04:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:21.975 12:04:15 -- common/autotest_common.sh@10 -- # set +x 00:26:21.975 [2024-06-10 12:04:15.546543] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:21.975 [ 00:26:21.975 { 00:26:21.975 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:21.975 "subtype": "Discovery", 00:26:21.975 "listen_addresses": [], 00:26:21.975 "allow_any_host": true, 00:26:21.975 "hosts": [] 00:26:21.975 }, 00:26:21.975 { 00:26:21.975 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:26:21.975 "subtype": "NVMe", 00:26:21.975 "listen_addresses": [ 00:26:21.975 { 00:26:21.975 "transport": "TCP", 00:26:21.975 "trtype": "TCP", 00:26:21.975 "adrfam": "IPv4", 00:26:21.975 "traddr": "10.0.0.2", 00:26:21.975 "trsvcid": "4420" 00:26:21.975 } 00:26:21.975 ], 00:26:21.975 "allow_any_host": true, 00:26:21.975 "hosts": [], 00:26:21.975 "serial_number": "SPDK00000000000001", 00:26:21.975 "model_number": "SPDK bdev Controller", 00:26:21.975 "max_namespaces": 2, 00:26:21.975 "min_cntlid": 1, 00:26:21.975 "max_cntlid": 65519, 00:26:21.975 "namespaces": [ 00:26:21.975 { 00:26:21.975 "nsid": 1, 00:26:21.975 "bdev_name": "Malloc0", 00:26:21.975 "name": "Malloc0", 00:26:21.975 "nguid": "6E390E2917C24E55AC55AED587950298", 00:26:21.975 "uuid": "6e390e29-17c2-4e55-ac55-aed587950298" 00:26:21.975 } 00:26:21.975 ] 00:26:21.975 } 00:26:21.975 ] 00:26:21.975 12:04:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:21.975 12:04:15 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:21.975 12:04:15 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:21.975 12:04:15 -- host/aer.sh@33 -- # aerpid=2070548 00:26:21.975 12:04:15 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:21.975 12:04:15 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:21.975 12:04:15 -- common/autotest_common.sh@1244 -- # local i=0 00:26:21.975 12:04:15 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:21.975 12:04:15 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:26:21.975 12:04:15 -- common/autotest_common.sh@1247 -- # i=1 00:26:21.975 12:04:15 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:26:21.975 EAL: No free 2048 kB hugepages reported on node 1 00:26:21.975 12:04:15 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:21.975 12:04:15 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:26:21.975 12:04:15 -- common/autotest_common.sh@1247 -- # i=2 00:26:21.975 12:04:15 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:26:22.236 12:04:15 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:22.236 12:04:15 -- common/autotest_common.sh@1246 -- # '[' 2 -lt 200 ']' 00:26:22.236 12:04:15 -- common/autotest_common.sh@1247 -- # i=3 00:26:22.236 12:04:15 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:26:22.236 12:04:15 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:22.236 12:04:15 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:22.236 12:04:15 -- common/autotest_common.sh@1255 -- # return 0 00:26:22.236 12:04:15 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:22.236 12:04:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.236 12:04:15 -- common/autotest_common.sh@10 -- # set +x 00:26:22.236 Malloc1 00:26:22.236 12:04:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.236 12:04:15 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:22.236 12:04:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.236 12:04:15 -- common/autotest_common.sh@10 -- # set +x 00:26:22.236 12:04:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.236 12:04:15 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:22.236 12:04:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.236 12:04:15 -- common/autotest_common.sh@10 -- # set +x 00:26:22.236 Asynchronous Event Request test 00:26:22.236 Attaching to 10.0.0.2 00:26:22.236 Attached to 10.0.0.2 00:26:22.236 Registering asynchronous event callbacks... 00:26:22.236 Starting namespace attribute notice tests for all controllers... 00:26:22.236 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:22.236 aer_cb - Changed Namespace 00:26:22.236 Cleaning up... 00:26:22.236 [ 00:26:22.236 { 00:26:22.236 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:22.236 "subtype": "Discovery", 00:26:22.236 "listen_addresses": [], 00:26:22.236 "allow_any_host": true, 00:26:22.236 "hosts": [] 00:26:22.236 }, 00:26:22.236 { 00:26:22.236 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:22.237 "subtype": "NVMe", 00:26:22.237 "listen_addresses": [ 00:26:22.237 { 00:26:22.237 "transport": "TCP", 00:26:22.237 "trtype": "TCP", 00:26:22.237 "adrfam": "IPv4", 00:26:22.237 "traddr": "10.0.0.2", 00:26:22.237 "trsvcid": "4420" 00:26:22.237 } 00:26:22.237 ], 00:26:22.237 "allow_any_host": true, 00:26:22.237 "hosts": [], 00:26:22.237 "serial_number": "SPDK00000000000001", 00:26:22.237 "model_number": "SPDK bdev Controller", 00:26:22.237 "max_namespaces": 2, 00:26:22.237 "min_cntlid": 1, 00:26:22.237 "max_cntlid": 65519, 00:26:22.237 "namespaces": [ 00:26:22.237 { 00:26:22.237 "nsid": 1, 00:26:22.237 "bdev_name": "Malloc0", 00:26:22.237 "name": "Malloc0", 00:26:22.237 "nguid": "6E390E2917C24E55AC55AED587950298", 00:26:22.237 "uuid": "6e390e29-17c2-4e55-ac55-aed587950298" 00:26:22.237 }, 00:26:22.237 { 00:26:22.237 "nsid": 2, 00:26:22.237 "bdev_name": "Malloc1", 00:26:22.237 "name": "Malloc1", 00:26:22.237 "nguid": "878F5970088546ABACB1603A59225995", 00:26:22.237 "uuid": "878f5970-0885-46ab-acb1-603a59225995" 00:26:22.237 } 00:26:22.237 ] 00:26:22.237 } 00:26:22.237 ] 00:26:22.237 12:04:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.237 12:04:15 -- host/aer.sh@43 -- # wait 2070548 00:26:22.237 12:04:15 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:22.237 12:04:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.237 12:04:15 -- common/autotest_common.sh@10 -- # set +x 00:26:22.237 12:04:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.237 12:04:15 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:22.237 12:04:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.237 12:04:15 -- common/autotest_common.sh@10 -- # set +x 00:26:22.237 12:04:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.237 12:04:15 -- host/aer.sh@47 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:22.237 12:04:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.237 12:04:15 -- common/autotest_common.sh@10 -- # set +x 00:26:22.237 12:04:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.237 12:04:15 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:22.237 12:04:15 -- host/aer.sh@51 -- # nvmftestfini 00:26:22.237 12:04:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:22.237 12:04:15 -- nvmf/common.sh@116 -- # sync 00:26:22.237 12:04:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:22.237 12:04:15 -- nvmf/common.sh@119 -- # set +e 00:26:22.237 12:04:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:22.237 12:04:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:22.237 rmmod nvme_tcp 00:26:22.498 rmmod nvme_fabrics 00:26:22.498 rmmod nvme_keyring 00:26:22.498 12:04:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:22.498 12:04:16 -- nvmf/common.sh@123 -- # set -e 00:26:22.498 12:04:16 -- nvmf/common.sh@124 -- # return 0 00:26:22.498 12:04:16 -- nvmf/common.sh@477 -- # '[' -n 2070291 ']' 00:26:22.498 12:04:16 -- nvmf/common.sh@478 -- # killprocess 2070291 00:26:22.498 12:04:16 -- common/autotest_common.sh@926 -- # '[' -z 2070291 ']' 00:26:22.498 12:04:16 -- common/autotest_common.sh@930 -- # kill -0 2070291 00:26:22.498 12:04:16 -- common/autotest_common.sh@931 -- # uname 00:26:22.498 12:04:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:22.498 12:04:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2070291 00:26:22.498 12:04:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:22.498 12:04:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:22.498 12:04:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2070291' 00:26:22.498 killing process with pid 2070291 00:26:22.498 12:04:16 -- common/autotest_common.sh@945 -- # kill 2070291 00:26:22.498 [2024-06-10 12:04:16.122706] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:22.498 12:04:16 -- common/autotest_common.sh@950 -- # wait 2070291 00:26:22.498 12:04:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:22.498 12:04:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:22.498 12:04:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:22.498 12:04:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:22.498 12:04:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:22.498 12:04:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.498 12:04:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:22.498 12:04:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.047 12:04:18 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:25.047 00:26:25.047 real 0m11.179s 00:26:25.047 user 0m7.928s 00:26:25.047 sys 0m5.823s 00:26:25.047 12:04:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:25.048 12:04:18 -- common/autotest_common.sh@10 -- # set +x 00:26:25.048 ************************************ 00:26:25.048 END TEST nvmf_aer 00:26:25.048 ************************************ 00:26:25.048 12:04:18 -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:25.048 12:04:18 -- common/autotest_common.sh@1077 -- # 
'[' 3 -le 1 ']' 00:26:25.048 12:04:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:25.048 12:04:18 -- common/autotest_common.sh@10 -- # set +x 00:26:25.048 ************************************ 00:26:25.048 START TEST nvmf_async_init 00:26:25.048 ************************************ 00:26:25.048 12:04:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:25.048 * Looking for test storage... 00:26:25.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:25.048 12:04:18 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:25.048 12:04:18 -- nvmf/common.sh@7 -- # uname -s 00:26:25.048 12:04:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.048 12:04:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.048 12:04:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.048 12:04:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.048 12:04:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:25.048 12:04:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:25.048 12:04:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.048 12:04:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:25.048 12:04:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.048 12:04:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:25.048 12:04:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:25.048 12:04:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:25.048 12:04:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.048 12:04:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:25.048 12:04:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:25.048 12:04:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:25.048 12:04:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.048 12:04:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.048 12:04:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.048 12:04:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.048 12:04:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.048 12:04:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.048 12:04:18 -- paths/export.sh@5 -- # export PATH 00:26:25.048 12:04:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.048 12:04:18 -- nvmf/common.sh@46 -- # : 0 00:26:25.048 12:04:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:25.048 12:04:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:25.048 12:04:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:25.048 12:04:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:25.048 12:04:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:25.048 12:04:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:25.048 12:04:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:25.048 12:04:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:25.048 12:04:18 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:25.048 12:04:18 -- host/async_init.sh@14 -- # null_block_size=512 00:26:25.048 12:04:18 -- host/async_init.sh@15 -- # null_bdev=null0 00:26:25.048 12:04:18 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:25.048 12:04:18 -- host/async_init.sh@20 -- # uuidgen 00:26:25.048 12:04:18 -- host/async_init.sh@20 -- # tr -d - 00:26:25.048 12:04:18 -- host/async_init.sh@20 -- # nguid=b439dbcaf6a1410eb2d5309d8436fdf5 00:26:25.048 12:04:18 -- host/async_init.sh@22 -- # nvmftestinit 00:26:25.048 12:04:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:25.048 12:04:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.048 12:04:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:25.048 12:04:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:25.048 12:04:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:25.048 12:04:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.048 12:04:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:25.048 12:04:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.048 12:04:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:25.048 12:04:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:25.048 12:04:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:25.048 12:04:18 -- common/autotest_common.sh@10 -- # set +x 00:26:31.639 12:04:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:31.639 12:04:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:31.639 12:04:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:31.639 12:04:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:31.639 12:04:25 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:31.639 12:04:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:31.639 12:04:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:31.639 12:04:25 -- nvmf/common.sh@294 -- # net_devs=() 00:26:31.639 12:04:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:31.639 12:04:25 -- nvmf/common.sh@295 -- # e810=() 00:26:31.639 12:04:25 -- nvmf/common.sh@295 -- # local -ga e810 00:26:31.639 12:04:25 -- nvmf/common.sh@296 -- # x722=() 00:26:31.639 12:04:25 -- nvmf/common.sh@296 -- # local -ga x722 00:26:31.639 12:04:25 -- nvmf/common.sh@297 -- # mlx=() 00:26:31.639 12:04:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:31.639 12:04:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:31.639 12:04:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:31.639 12:04:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:31.639 12:04:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:31.639 12:04:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:31.639 12:04:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:31.639 12:04:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:31.639 12:04:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:31.639 12:04:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:31.639 12:04:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:31.639 12:04:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:31.639 12:04:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:31.639 12:04:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:31.639 12:04:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:31.639 12:04:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:31.639 12:04:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:31.639 12:04:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:31.639 12:04:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:31.639 12:04:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:31.639 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:31.639 12:04:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:31.639 12:04:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:31.639 12:04:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:31.639 12:04:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:31.639 12:04:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:31.639 12:04:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:31.639 12:04:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:31.639 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:31.639 12:04:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:31.639 12:04:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:31.639 12:04:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:31.639 12:04:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:31.639 12:04:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:31.639 12:04:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:31.639 12:04:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:31.639 12:04:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:31.639 12:04:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:31.639 
12:04:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:31.639 12:04:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:31.639 12:04:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:31.639 12:04:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:31.639 Found net devices under 0000:31:00.0: cvl_0_0 00:26:31.639 12:04:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:31.639 12:04:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:31.639 12:04:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:31.639 12:04:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:31.639 12:04:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:31.639 12:04:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:31.639 Found net devices under 0000:31:00.1: cvl_0_1 00:26:31.639 12:04:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:31.639 12:04:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:31.639 12:04:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:31.639 12:04:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:31.639 12:04:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:31.639 12:04:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:31.639 12:04:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:31.639 12:04:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:31.639 12:04:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:31.639 12:04:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:31.639 12:04:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:31.639 12:04:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:31.639 12:04:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:31.639 12:04:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:31.639 12:04:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:31.639 12:04:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:31.639 12:04:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:31.639 12:04:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:31.639 12:04:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:31.901 12:04:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:31.901 12:04:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:31.901 12:04:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:31.901 12:04:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:31.901 12:04:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:31.901 12:04:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:31.901 12:04:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:31.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:31.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms 00:26:31.901 00:26:31.901 --- 10.0.0.2 ping statistics --- 00:26:31.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.901 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:26:31.901 12:04:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:31.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:31.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:26:31.901 00:26:31.901 --- 10.0.0.1 ping statistics --- 00:26:31.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.901 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:26:31.901 12:04:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:31.901 12:04:25 -- nvmf/common.sh@410 -- # return 0 00:26:31.901 12:04:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:31.901 12:04:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:31.901 12:04:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:31.901 12:04:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:31.901 12:04:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:31.901 12:04:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:31.901 12:04:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:32.162 12:04:25 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:32.162 12:04:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:32.162 12:04:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:32.162 12:04:25 -- common/autotest_common.sh@10 -- # set +x 00:26:32.162 12:04:25 -- nvmf/common.sh@469 -- # nvmfpid=2074879 00:26:32.162 12:04:25 -- nvmf/common.sh@470 -- # waitforlisten 2074879 00:26:32.162 12:04:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:32.162 12:04:25 -- common/autotest_common.sh@819 -- # '[' -z 2074879 ']' 00:26:32.162 12:04:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.162 12:04:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:32.162 12:04:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.162 12:04:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:32.162 12:04:25 -- common/autotest_common.sh@10 -- # set +x 00:26:32.162 [2024-06-10 12:04:25.749801] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:32.162 [2024-06-10 12:04:25.749868] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.162 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.162 [2024-06-10 12:04:25.823043] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.162 [2024-06-10 12:04:25.896208] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:32.162 [2024-06-10 12:04:25.896341] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:32.162 [2024-06-10 12:04:25.896350] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:32.162 [2024-06-10 12:04:25.896357] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
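The nvmf_tcp_init step traced above splits the two ports of the E810 NIC between the default namespace (initiator side) and a dedicated target namespace, then verifies reachability before any NVMe/TCP traffic. A minimal sketch of that topology, using the interface names, addresses and port from this run (they are specific to this host and job, not defaults):

    # Move one port of the NIC pair into a dedicated namespace so target and
    # initiator talk over real hardware instead of loopback.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port and confirm reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1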
00:26:32.162 [2024-06-10 12:04:25.896377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.103 12:04:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:33.103 12:04:26 -- common/autotest_common.sh@852 -- # return 0 00:26:33.103 12:04:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:33.103 12:04:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:33.103 12:04:26 -- common/autotest_common.sh@10 -- # set +x 00:26:33.103 12:04:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:33.103 12:04:26 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:33.103 12:04:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:33.103 12:04:26 -- common/autotest_common.sh@10 -- # set +x 00:26:33.103 [2024-06-10 12:04:26.567230] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:33.103 12:04:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:33.103 12:04:26 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:33.103 12:04:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:33.103 12:04:26 -- common/autotest_common.sh@10 -- # set +x 00:26:33.103 null0 00:26:33.103 12:04:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:33.103 12:04:26 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:33.103 12:04:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:33.103 12:04:26 -- common/autotest_common.sh@10 -- # set +x 00:26:33.103 12:04:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:33.103 12:04:26 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:33.103 12:04:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:33.103 12:04:26 -- common/autotest_common.sh@10 -- # set +x 00:26:33.103 12:04:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:33.103 12:04:26 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b439dbcaf6a1410eb2d5309d8436fdf5 00:26:33.103 12:04:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:33.103 12:04:26 -- common/autotest_common.sh@10 -- # set +x 00:26:33.103 12:04:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:33.103 12:04:26 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:33.103 12:04:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:33.103 12:04:26 -- common/autotest_common.sh@10 -- # set +x 00:26:33.103 [2024-06-10 12:04:26.623476] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:33.103 12:04:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:33.103 12:04:26 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:33.103 12:04:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:33.103 12:04:26 -- common/autotest_common.sh@10 -- # set +x 00:26:33.103 nvme0n1 00:26:33.103 12:04:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:33.103 12:04:26 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:33.103 12:04:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:33.103 12:04:26 -- common/autotest_common.sh@10 -- # set +x 00:26:33.103 [ 00:26:33.103 { 00:26:33.103 "name": "nvme0n1", 00:26:33.103 "aliases": [ 00:26:33.103 
"b439dbca-f6a1-410e-b2d5-309d8436fdf5" 00:26:33.103 ], 00:26:33.103 "product_name": "NVMe disk", 00:26:33.103 "block_size": 512, 00:26:33.103 "num_blocks": 2097152, 00:26:33.103 "uuid": "b439dbca-f6a1-410e-b2d5-309d8436fdf5", 00:26:33.395 "assigned_rate_limits": { 00:26:33.395 "rw_ios_per_sec": 0, 00:26:33.395 "rw_mbytes_per_sec": 0, 00:26:33.395 "r_mbytes_per_sec": 0, 00:26:33.395 "w_mbytes_per_sec": 0 00:26:33.395 }, 00:26:33.395 "claimed": false, 00:26:33.395 "zoned": false, 00:26:33.395 "supported_io_types": { 00:26:33.395 "read": true, 00:26:33.395 "write": true, 00:26:33.395 "unmap": false, 00:26:33.395 "write_zeroes": true, 00:26:33.395 "flush": true, 00:26:33.395 "reset": true, 00:26:33.395 "compare": true, 00:26:33.395 "compare_and_write": true, 00:26:33.395 "abort": true, 00:26:33.395 "nvme_admin": true, 00:26:33.395 "nvme_io": true 00:26:33.395 }, 00:26:33.395 "driver_specific": { 00:26:33.395 "nvme": [ 00:26:33.395 { 00:26:33.395 "trid": { 00:26:33.395 "trtype": "TCP", 00:26:33.395 "adrfam": "IPv4", 00:26:33.395 "traddr": "10.0.0.2", 00:26:33.395 "trsvcid": "4420", 00:26:33.395 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:33.395 }, 00:26:33.395 "ctrlr_data": { 00:26:33.395 "cntlid": 1, 00:26:33.395 "vendor_id": "0x8086", 00:26:33.395 "model_number": "SPDK bdev Controller", 00:26:33.395 "serial_number": "00000000000000000000", 00:26:33.395 "firmware_revision": "24.01.1", 00:26:33.395 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:33.395 "oacs": { 00:26:33.395 "security": 0, 00:26:33.395 "format": 0, 00:26:33.395 "firmware": 0, 00:26:33.395 "ns_manage": 0 00:26:33.395 }, 00:26:33.395 "multi_ctrlr": true, 00:26:33.395 "ana_reporting": false 00:26:33.395 }, 00:26:33.395 "vs": { 00:26:33.395 "nvme_version": "1.3" 00:26:33.395 }, 00:26:33.395 "ns_data": { 00:26:33.395 "id": 1, 00:26:33.395 "can_share": true 00:26:33.395 } 00:26:33.395 } 00:26:33.395 ], 00:26:33.395 "mp_policy": "active_passive" 00:26:33.395 } 00:26:33.395 } 00:26:33.395 ] 00:26:33.395 12:04:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:33.395 12:04:26 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:33.395 12:04:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:33.395 12:04:26 -- common/autotest_common.sh@10 -- # set +x 00:26:33.395 [2024-06-10 12:04:26.888050] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:33.395 [2024-06-10 12:04:26.888110] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd8a90 (9): Bad file descriptor 00:26:33.395 [2024-06-10 12:04:27.020334] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:33.395 12:04:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:33.395 12:04:27 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:33.395 12:04:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:33.395 12:04:27 -- common/autotest_common.sh@10 -- # set +x 00:26:33.395 [ 00:26:33.395 { 00:26:33.395 "name": "nvme0n1", 00:26:33.395 "aliases": [ 00:26:33.395 "b439dbca-f6a1-410e-b2d5-309d8436fdf5" 00:26:33.395 ], 00:26:33.395 "product_name": "NVMe disk", 00:26:33.395 "block_size": 512, 00:26:33.395 "num_blocks": 2097152, 00:26:33.395 "uuid": "b439dbca-f6a1-410e-b2d5-309d8436fdf5", 00:26:33.395 "assigned_rate_limits": { 00:26:33.395 "rw_ios_per_sec": 0, 00:26:33.395 "rw_mbytes_per_sec": 0, 00:26:33.395 "r_mbytes_per_sec": 0, 00:26:33.395 "w_mbytes_per_sec": 0 00:26:33.395 }, 00:26:33.395 "claimed": false, 00:26:33.395 "zoned": false, 00:26:33.395 "supported_io_types": { 00:26:33.395 "read": true, 00:26:33.395 "write": true, 00:26:33.395 "unmap": false, 00:26:33.395 "write_zeroes": true, 00:26:33.395 "flush": true, 00:26:33.395 "reset": true, 00:26:33.395 "compare": true, 00:26:33.395 "compare_and_write": true, 00:26:33.395 "abort": true, 00:26:33.395 "nvme_admin": true, 00:26:33.395 "nvme_io": true 00:26:33.395 }, 00:26:33.395 "driver_specific": { 00:26:33.395 "nvme": [ 00:26:33.395 { 00:26:33.395 "trid": { 00:26:33.395 "trtype": "TCP", 00:26:33.395 "adrfam": "IPv4", 00:26:33.395 "traddr": "10.0.0.2", 00:26:33.395 "trsvcid": "4420", 00:26:33.395 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:33.395 }, 00:26:33.395 "ctrlr_data": { 00:26:33.395 "cntlid": 2, 00:26:33.395 "vendor_id": "0x8086", 00:26:33.395 "model_number": "SPDK bdev Controller", 00:26:33.395 "serial_number": "00000000000000000000", 00:26:33.395 "firmware_revision": "24.01.1", 00:26:33.395 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:33.395 "oacs": { 00:26:33.395 "security": 0, 00:26:33.395 "format": 0, 00:26:33.395 "firmware": 0, 00:26:33.395 "ns_manage": 0 00:26:33.395 }, 00:26:33.395 "multi_ctrlr": true, 00:26:33.395 "ana_reporting": false 00:26:33.395 }, 00:26:33.395 "vs": { 00:26:33.395 "nvme_version": "1.3" 00:26:33.395 }, 00:26:33.395 "ns_data": { 00:26:33.395 "id": 1, 00:26:33.395 "can_share": true 00:26:33.395 } 00:26:33.395 } 00:26:33.395 ], 00:26:33.395 "mp_policy": "active_passive" 00:26:33.395 } 00:26:33.395 } 00:26:33.395 ] 00:26:33.395 12:04:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:33.395 12:04:27 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.395 12:04:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:33.395 12:04:27 -- common/autotest_common.sh@10 -- # set +x 00:26:33.395 12:04:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:33.395 12:04:27 -- host/async_init.sh@53 -- # mktemp 00:26:33.395 12:04:27 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.EOBe5RgGwu 00:26:33.395 12:04:27 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:33.395 12:04:27 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.EOBe5RgGwu 00:26:33.395 12:04:27 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:33.395 12:04:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:33.395 12:04:27 -- common/autotest_common.sh@10 -- # set +x 00:26:33.395 12:04:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:33.395 12:04:27 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:26:33.395 12:04:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:33.395 12:04:27 -- common/autotest_common.sh@10 -- # set +x 00:26:33.395 [2024-06-10 12:04:27.084647] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:33.395 [2024-06-10 12:04:27.084760] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:33.395 12:04:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:33.395 12:04:27 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.EOBe5RgGwu 00:26:33.395 12:04:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:33.395 12:04:27 -- common/autotest_common.sh@10 -- # set +x 00:26:33.395 12:04:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:33.395 12:04:27 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.EOBe5RgGwu 00:26:33.395 12:04:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:33.395 12:04:27 -- common/autotest_common.sh@10 -- # set +x 00:26:33.395 [2024-06-10 12:04:27.108707] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:33.680 nvme0n1 00:26:33.680 12:04:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:33.680 12:04:27 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:33.680 12:04:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:33.680 12:04:27 -- common/autotest_common.sh@10 -- # set +x 00:26:33.680 [ 00:26:33.680 { 00:26:33.680 "name": "nvme0n1", 00:26:33.680 "aliases": [ 00:26:33.680 "b439dbca-f6a1-410e-b2d5-309d8436fdf5" 00:26:33.680 ], 00:26:33.680 "product_name": "NVMe disk", 00:26:33.680 "block_size": 512, 00:26:33.680 "num_blocks": 2097152, 00:26:33.680 "uuid": "b439dbca-f6a1-410e-b2d5-309d8436fdf5", 00:26:33.680 "assigned_rate_limits": { 00:26:33.680 "rw_ios_per_sec": 0, 00:26:33.680 "rw_mbytes_per_sec": 0, 00:26:33.680 "r_mbytes_per_sec": 0, 00:26:33.680 "w_mbytes_per_sec": 0 00:26:33.680 }, 00:26:33.680 "claimed": false, 00:26:33.680 "zoned": false, 00:26:33.680 "supported_io_types": { 00:26:33.680 "read": true, 00:26:33.680 "write": true, 00:26:33.681 "unmap": false, 00:26:33.681 "write_zeroes": true, 00:26:33.681 "flush": true, 00:26:33.681 "reset": true, 00:26:33.681 "compare": true, 00:26:33.681 "compare_and_write": true, 00:26:33.681 "abort": true, 00:26:33.681 "nvme_admin": true, 00:26:33.681 "nvme_io": true 00:26:33.681 }, 00:26:33.681 "driver_specific": { 00:26:33.681 "nvme": [ 00:26:33.681 { 00:26:33.681 "trid": { 00:26:33.681 "trtype": "TCP", 00:26:33.681 "adrfam": "IPv4", 00:26:33.681 "traddr": "10.0.0.2", 00:26:33.681 "trsvcid": "4421", 00:26:33.681 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:33.681 }, 00:26:33.681 "ctrlr_data": { 00:26:33.681 "cntlid": 3, 00:26:33.681 "vendor_id": "0x8086", 00:26:33.681 "model_number": "SPDK bdev Controller", 00:26:33.681 "serial_number": "00000000000000000000", 00:26:33.681 "firmware_revision": "24.01.1", 00:26:33.681 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:33.681 "oacs": { 00:26:33.681 "security": 0, 00:26:33.681 "format": 0, 00:26:33.681 "firmware": 0, 00:26:33.681 "ns_manage": 0 00:26:33.681 }, 00:26:33.681 "multi_ctrlr": true, 00:26:33.681 "ana_reporting": false 00:26:33.681 }, 00:26:33.681 "vs": 
{ 00:26:33.681 "nvme_version": "1.3" 00:26:33.681 }, 00:26:33.681 "ns_data": { 00:26:33.681 "id": 1, 00:26:33.681 "can_share": true 00:26:33.681 } 00:26:33.681 } 00:26:33.681 ], 00:26:33.681 "mp_policy": "active_passive" 00:26:33.681 } 00:26:33.681 } 00:26:33.681 ] 00:26:33.681 12:04:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:33.681 12:04:27 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.681 12:04:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:33.681 12:04:27 -- common/autotest_common.sh@10 -- # set +x 00:26:33.681 12:04:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:33.681 12:04:27 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.EOBe5RgGwu 00:26:33.681 12:04:27 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:26:33.681 12:04:27 -- host/async_init.sh@78 -- # nvmftestfini 00:26:33.681 12:04:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:33.681 12:04:27 -- nvmf/common.sh@116 -- # sync 00:26:33.681 12:04:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:33.681 12:04:27 -- nvmf/common.sh@119 -- # set +e 00:26:33.681 12:04:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:33.681 12:04:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:33.681 rmmod nvme_tcp 00:26:33.681 rmmod nvme_fabrics 00:26:33.681 rmmod nvme_keyring 00:26:33.681 12:04:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:33.681 12:04:27 -- nvmf/common.sh@123 -- # set -e 00:26:33.681 12:04:27 -- nvmf/common.sh@124 -- # return 0 00:26:33.681 12:04:27 -- nvmf/common.sh@477 -- # '[' -n 2074879 ']' 00:26:33.681 12:04:27 -- nvmf/common.sh@478 -- # killprocess 2074879 00:26:33.681 12:04:27 -- common/autotest_common.sh@926 -- # '[' -z 2074879 ']' 00:26:33.681 12:04:27 -- common/autotest_common.sh@930 -- # kill -0 2074879 00:26:33.681 12:04:27 -- common/autotest_common.sh@931 -- # uname 00:26:33.681 12:04:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:33.681 12:04:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2074879 00:26:33.681 12:04:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:33.681 12:04:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:33.681 12:04:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2074879' 00:26:33.681 killing process with pid 2074879 00:26:33.681 12:04:27 -- common/autotest_common.sh@945 -- # kill 2074879 00:26:33.681 12:04:27 -- common/autotest_common.sh@950 -- # wait 2074879 00:26:33.942 12:04:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:33.942 12:04:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:33.942 12:04:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:33.942 12:04:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:33.942 12:04:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:33.942 12:04:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.942 12:04:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:33.942 12:04:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.857 12:04:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:35.857 00:26:35.857 real 0m11.182s 00:26:35.857 user 0m3.954s 00:26:35.857 sys 0m5.649s 00:26:35.857 12:04:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:35.857 12:04:29 -- common/autotest_common.sh@10 -- # set +x 00:26:35.857 ************************************ 00:26:35.857 END TEST nvmf_async_init 00:26:35.857 
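Before tearing down, the test also exercised TLS: it wrote a PSK in the NVMe TLS interchange format, restricted the subsystem to explicit hosts, opened a second secure-channel listener on port 4421, and reattached with the PSK. A condensed sketch of that flow; the key value and temp path are the ones from this run and should be treated as examples only, and rpc.py is assumed to be on PATH:

    # PSK in NVMe TLS 1.3 interchange format; keep it mode 0600.
    KEY=/tmp/tmp.EOBe5RgGwu
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY"
    chmod 0600 "$KEY"
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$KEY"
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"

Both listen and attach paths log "TLS support is considered experimental" on this SPDK revision, as seen above.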
************************************ 00:26:35.857 12:04:29 -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:35.857 12:04:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:35.857 12:04:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:35.857 12:04:29 -- common/autotest_common.sh@10 -- # set +x 00:26:35.857 ************************************ 00:26:35.857 START TEST dma 00:26:35.857 ************************************ 00:26:35.857 12:04:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:36.118 * Looking for test storage... 00:26:36.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:36.118 12:04:29 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:36.118 12:04:29 -- nvmf/common.sh@7 -- # uname -s 00:26:36.118 12:04:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:36.118 12:04:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:36.118 12:04:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:36.118 12:04:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:36.118 12:04:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:36.118 12:04:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:36.118 12:04:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:36.118 12:04:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:36.118 12:04:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:36.118 12:04:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:36.118 12:04:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:36.118 12:04:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:36.118 12:04:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:36.118 12:04:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:36.118 12:04:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:36.118 12:04:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:36.118 12:04:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:36.118 12:04:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:36.118 12:04:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:36.118 12:04:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.118 12:04:29 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.119 12:04:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.119 12:04:29 -- paths/export.sh@5 -- # export PATH 00:26:36.119 12:04:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.119 12:04:29 -- nvmf/common.sh@46 -- # : 0 00:26:36.119 12:04:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:36.119 12:04:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:36.119 12:04:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:36.119 12:04:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:36.119 12:04:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:36.119 12:04:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:36.119 12:04:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:36.119 12:04:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:36.119 12:04:29 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:26:36.119 12:04:29 -- host/dma.sh@13 -- # exit 0 00:26:36.119 00:26:36.119 real 0m0.126s 00:26:36.119 user 0m0.049s 00:26:36.119 sys 0m0.085s 00:26:36.119 12:04:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:36.119 12:04:29 -- common/autotest_common.sh@10 -- # set +x 00:26:36.119 ************************************ 00:26:36.119 END TEST dma 00:26:36.119 ************************************ 00:26:36.119 12:04:29 -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:36.119 12:04:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:36.119 12:04:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:36.119 12:04:29 -- common/autotest_common.sh@10 -- # set +x 00:26:36.119 ************************************ 00:26:36.119 START TEST nvmf_identify 00:26:36.119 ************************************ 00:26:36.119 12:04:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:36.119 * Looking for 
test storage... 00:26:36.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:36.119 12:04:29 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:36.119 12:04:29 -- nvmf/common.sh@7 -- # uname -s 00:26:36.119 12:04:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:36.119 12:04:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:36.119 12:04:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:36.119 12:04:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:36.119 12:04:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:36.119 12:04:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:36.119 12:04:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:36.119 12:04:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:36.119 12:04:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:36.119 12:04:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:36.119 12:04:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:36.119 12:04:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:36.119 12:04:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:36.119 12:04:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:36.119 12:04:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:36.381 12:04:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:36.381 12:04:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:36.381 12:04:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:36.381 12:04:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:36.381 12:04:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.381 12:04:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.381 12:04:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.381 12:04:29 -- paths/export.sh@5 -- # export PATH 00:26:36.381 12:04:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.381 12:04:29 -- nvmf/common.sh@46 -- # : 0 00:26:36.381 12:04:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:36.381 12:04:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:36.381 12:04:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:36.381 12:04:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:36.381 12:04:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:36.381 12:04:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:36.381 12:04:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:36.381 12:04:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:36.381 12:04:29 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:36.381 12:04:29 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:36.381 12:04:29 -- host/identify.sh@14 -- # nvmftestinit 00:26:36.381 12:04:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:36.381 12:04:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:36.381 12:04:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:36.381 12:04:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:36.381 12:04:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:36.381 12:04:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.381 12:04:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:36.381 12:04:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:36.381 12:04:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:36.381 12:04:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:36.381 12:04:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:36.381 12:04:29 -- common/autotest_common.sh@10 -- # set +x 00:26:44.532 12:04:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:44.532 12:04:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:44.532 12:04:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:44.532 12:04:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:44.532 12:04:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:44.532 12:04:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:44.532 12:04:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:44.532 12:04:36 -- nvmf/common.sh@294 -- # net_devs=() 00:26:44.532 12:04:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:44.532 12:04:36 -- nvmf/common.sh@295 
-- # e810=() 00:26:44.532 12:04:36 -- nvmf/common.sh@295 -- # local -ga e810 00:26:44.532 12:04:36 -- nvmf/common.sh@296 -- # x722=() 00:26:44.532 12:04:36 -- nvmf/common.sh@296 -- # local -ga x722 00:26:44.532 12:04:36 -- nvmf/common.sh@297 -- # mlx=() 00:26:44.532 12:04:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:44.532 12:04:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:44.532 12:04:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:44.532 12:04:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:44.532 12:04:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:44.532 12:04:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:44.532 12:04:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:44.532 12:04:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:44.532 12:04:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:44.532 12:04:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:44.532 12:04:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:44.532 12:04:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:44.532 12:04:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:44.532 12:04:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:44.532 12:04:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:44.532 12:04:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:44.532 12:04:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:44.532 12:04:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:44.532 12:04:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:44.532 12:04:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:44.532 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:44.532 12:04:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:44.532 12:04:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:44.532 12:04:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.532 12:04:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.532 12:04:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:44.532 12:04:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:44.532 12:04:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:44.532 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:44.532 12:04:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:44.532 12:04:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:44.532 12:04:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.532 12:04:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.533 12:04:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:44.533 12:04:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:44.533 12:04:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:44.533 12:04:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:44.533 12:04:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:44.533 12:04:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.533 12:04:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:44.533 12:04:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.533 12:04:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:44.533 Found 
net devices under 0000:31:00.0: cvl_0_0 00:26:44.533 12:04:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.533 12:04:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:44.533 12:04:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.533 12:04:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:44.533 12:04:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.533 12:04:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:44.533 Found net devices under 0000:31:00.1: cvl_0_1 00:26:44.533 12:04:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.533 12:04:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:44.533 12:04:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:44.533 12:04:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:44.533 12:04:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:44.533 12:04:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:44.533 12:04:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:44.533 12:04:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:44.533 12:04:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:44.533 12:04:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:44.533 12:04:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:44.533 12:04:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:44.533 12:04:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:44.533 12:04:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:44.533 12:04:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:44.533 12:04:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:44.533 12:04:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:44.533 12:04:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:44.533 12:04:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:44.533 12:04:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:44.533 12:04:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:44.533 12:04:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:44.533 12:04:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:44.533 12:04:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:44.533 12:04:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:44.533 12:04:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:44.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:44.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:26:44.533 00:26:44.533 --- 10.0.0.2 ping statistics --- 00:26:44.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.533 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:26:44.533 12:04:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:44.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:44.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:26:44.533 00:26:44.533 --- 10.0.0.1 ping statistics --- 00:26:44.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.533 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:26:44.533 12:04:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:44.533 12:04:37 -- nvmf/common.sh@410 -- # return 0 00:26:44.533 12:04:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:44.533 12:04:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:44.533 12:04:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:44.533 12:04:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:44.533 12:04:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:44.533 12:04:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:44.533 12:04:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:44.533 12:04:37 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:26:44.533 12:04:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:44.533 12:04:37 -- common/autotest_common.sh@10 -- # set +x 00:26:44.533 12:04:37 -- host/identify.sh@19 -- # nvmfpid=2079442 00:26:44.533 12:04:37 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:44.533 12:04:37 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:44.533 12:04:37 -- host/identify.sh@23 -- # waitforlisten 2079442 00:26:44.533 12:04:37 -- common/autotest_common.sh@819 -- # '[' -z 2079442 ']' 00:26:44.533 12:04:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.533 12:04:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:44.533 12:04:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.533 12:04:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:44.533 12:04:37 -- common/autotest_common.sh@10 -- # set +x 00:26:44.533 [2024-06-10 12:04:37.225213] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:44.533 [2024-06-10 12:04:37.225280] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:44.533 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.533 [2024-06-10 12:04:37.297422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:44.533 [2024-06-10 12:04:37.371528] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:44.533 [2024-06-10 12:04:37.371659] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:44.533 [2024-06-10 12:04:37.371669] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:44.533 [2024-06-10 12:04:37.371677] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
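For the identify test the target is launched the same way as before but pinned to four cores (-m 0xF), inside the target namespace, and the script then blocks until the RPC socket answers. A rough sketch of that pattern; the binary path is the one from this workspace, and the polling loop is an assumption standing in for waitforlisten, not its actual implementation:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Wait until the app serves its UNIX-domain RPC socket before issuing RPCs.
    until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1     # bail out if the target died during startup
        sleep 0.5
    done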
00:26:44.533 [2024-06-10 12:04:37.371817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.533 [2024-06-10 12:04:37.371916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:44.533 [2024-06-10 12:04:37.372075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.533 [2024-06-10 12:04:37.372076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:44.533 12:04:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:44.533 12:04:38 -- common/autotest_common.sh@852 -- # return 0 00:26:44.533 12:04:38 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:44.533 12:04:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:44.533 12:04:38 -- common/autotest_common.sh@10 -- # set +x 00:26:44.533 [2024-06-10 12:04:38.008234] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:44.533 12:04:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:44.533 12:04:38 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:26:44.533 12:04:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:44.533 12:04:38 -- common/autotest_common.sh@10 -- # set +x 00:26:44.533 12:04:38 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:44.533 12:04:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:44.533 12:04:38 -- common/autotest_common.sh@10 -- # set +x 00:26:44.533 Malloc0 00:26:44.533 12:04:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:44.533 12:04:38 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:44.533 12:04:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:44.533 12:04:38 -- common/autotest_common.sh@10 -- # set +x 00:26:44.533 12:04:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:44.533 12:04:38 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:26:44.533 12:04:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:44.533 12:04:38 -- common/autotest_common.sh@10 -- # set +x 00:26:44.533 12:04:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:44.533 12:04:38 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:44.533 12:04:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:44.533 12:04:38 -- common/autotest_common.sh@10 -- # set +x 00:26:44.533 [2024-06-10 12:04:38.107737] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:44.533 12:04:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:44.533 12:04:38 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:44.533 12:04:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:44.533 12:04:38 -- common/autotest_common.sh@10 -- # set +x 00:26:44.533 12:04:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:44.533 12:04:38 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:26:44.533 12:04:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:44.533 12:04:38 -- common/autotest_common.sh@10 -- # set +x 00:26:44.533 [2024-06-10 12:04:38.131594] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:44.533 [ 
00:26:44.533 { 00:26:44.533 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:44.533 "subtype": "Discovery", 00:26:44.533 "listen_addresses": [ 00:26:44.533 { 00:26:44.533 "transport": "TCP", 00:26:44.533 "trtype": "TCP", 00:26:44.533 "adrfam": "IPv4", 00:26:44.533 "traddr": "10.0.0.2", 00:26:44.533 "trsvcid": "4420" 00:26:44.533 } 00:26:44.533 ], 00:26:44.533 "allow_any_host": true, 00:26:44.533 "hosts": [] 00:26:44.533 }, 00:26:44.533 { 00:26:44.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:44.533 "subtype": "NVMe", 00:26:44.533 "listen_addresses": [ 00:26:44.533 { 00:26:44.533 "transport": "TCP", 00:26:44.533 "trtype": "TCP", 00:26:44.533 "adrfam": "IPv4", 00:26:44.533 "traddr": "10.0.0.2", 00:26:44.533 "trsvcid": "4420" 00:26:44.533 } 00:26:44.533 ], 00:26:44.533 "allow_any_host": true, 00:26:44.533 "hosts": [], 00:26:44.533 "serial_number": "SPDK00000000000001", 00:26:44.534 "model_number": "SPDK bdev Controller", 00:26:44.534 "max_namespaces": 32, 00:26:44.534 "min_cntlid": 1, 00:26:44.534 "max_cntlid": 65519, 00:26:44.534 "namespaces": [ 00:26:44.534 { 00:26:44.534 "nsid": 1, 00:26:44.534 "bdev_name": "Malloc0", 00:26:44.534 "name": "Malloc0", 00:26:44.534 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:26:44.534 "eui64": "ABCDEF0123456789", 00:26:44.534 "uuid": "d912572c-febe-488d-b9d6-5cf1c4a20024" 00:26:44.534 } 00:26:44.534 ] 00:26:44.534 } 00:26:44.534 ] 00:26:44.534 12:04:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:44.534 12:04:38 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:26:44.534 [2024-06-10 12:04:38.168943] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:44.534 [2024-06-10 12:04:38.169008] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2079705 ] 00:26:44.534 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.534 [2024-06-10 12:04:38.201884] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:26:44.534 [2024-06-10 12:04:38.201928] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:44.534 [2024-06-10 12:04:38.201933] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:44.534 [2024-06-10 12:04:38.201944] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:44.534 [2024-06-10 12:04:38.201951] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:44.534 [2024-06-10 12:04:38.205276] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:26:44.534 [2024-06-10 12:04:38.205309] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x23849e0 0 00:26:44.534 [2024-06-10 12:04:38.213251] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:44.534 [2024-06-10 12:04:38.213262] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:44.534 [2024-06-10 12:04:38.213267] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:44.534 [2024-06-10 12:04:38.213270] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:44.534 [2024-06-10 12:04:38.213306] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.534 [2024-06-10 12:04:38.213312] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.534 [2024-06-10 12:04:38.213316] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23849e0) 00:26:44.534 [2024-06-10 12:04:38.213329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:44.534 [2024-06-10 12:04:38.213344] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ec730, cid 0, qid 0 00:26:44.534 [2024-06-10 12:04:38.221255] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.534 [2024-06-10 12:04:38.221264] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.534 [2024-06-10 12:04:38.221268] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.534 [2024-06-10 12:04:38.221272] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23ec730) on tqpair=0x23849e0 00:26:44.534 [2024-06-10 12:04:38.221284] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:44.534 [2024-06-10 12:04:38.221290] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:26:44.534 [2024-06-10 12:04:38.221295] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:26:44.534 [2024-06-10 12:04:38.221309] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.534 [2024-06-10 12:04:38.221313] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:26:44.534 [2024-06-10 12:04:38.221317] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23849e0) 00:26:44.534 [2024-06-10 12:04:38.221324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.534 [2024-06-10 12:04:38.221336] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ec730, cid 0, qid 0 00:26:44.534 [2024-06-10 12:04:38.221577] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.534 [2024-06-10 12:04:38.221584] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.534 [2024-06-10 12:04:38.221587] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.534 [2024-06-10 12:04:38.221591] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23ec730) on tqpair=0x23849e0 00:26:44.534 [2024-06-10 12:04:38.221599] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:26:44.534 [2024-06-10 12:04:38.221607] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:26:44.534 [2024-06-10 12:04:38.221613] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.534 [2024-06-10 12:04:38.221617] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.534 [2024-06-10 12:04:38.221620] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23849e0) 00:26:44.534 [2024-06-10 12:04:38.221627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.534 [2024-06-10 12:04:38.221637] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ec730, cid 0, qid 0 00:26:44.534 [2024-06-10 12:04:38.221854] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.534 [2024-06-10 12:04:38.221860] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.534 [2024-06-10 12:04:38.221866] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.534 [2024-06-10 12:04:38.221870] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23ec730) on tqpair=0x23849e0 00:26:44.534 [2024-06-10 12:04:38.221876] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:26:44.534 [2024-06-10 12:04:38.221883] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:26:44.534 [2024-06-10 12:04:38.221890] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.534 [2024-06-10 12:04:38.221893] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.534 [2024-06-10 12:04:38.221897] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23849e0) 00:26:44.534 [2024-06-10 12:04:38.221903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.534 [2024-06-10 12:04:38.221913] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ec730, cid 0, qid 0 00:26:44.534 [2024-06-10 12:04:38.222121] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.534 [2024-06-10 
12:04:38.222128] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.534 [2024-06-10 12:04:38.222132] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.534 [2024-06-10 12:04:38.222135] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23ec730) on tqpair=0x23849e0 00:26:44.534 [2024-06-10 12:04:38.222141] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:44.534 [2024-06-10 12:04:38.222149] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.534 [2024-06-10 12:04:38.222153] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.534 [2024-06-10 12:04:38.222157] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23849e0) 00:26:44.534 [2024-06-10 12:04:38.222163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.534 [2024-06-10 12:04:38.222173] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ec730, cid 0, qid 0 00:26:44.534 [2024-06-10 12:04:38.222388] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.534 [2024-06-10 12:04:38.222395] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.534 [2024-06-10 12:04:38.222398] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.534 [2024-06-10 12:04:38.222402] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23ec730) on tqpair=0x23849e0 00:26:44.534 [2024-06-10 12:04:38.222407] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:26:44.534 [2024-06-10 12:04:38.222412] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:26:44.534 [2024-06-10 12:04:38.222419] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:44.534 [2024-06-10 12:04:38.222524] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:26:44.534 [2024-06-10 12:04:38.222529] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:44.534 [2024-06-10 12:04:38.222537] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.534 [2024-06-10 12:04:38.222541] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.534 [2024-06-10 12:04:38.222544] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23849e0) 00:26:44.534 [2024-06-10 12:04:38.222551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.534 [2024-06-10 12:04:38.222563] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ec730, cid 0, qid 0 00:26:44.534 [2024-06-10 12:04:38.222773] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.534 [2024-06-10 12:04:38.222780] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.534 [2024-06-10 12:04:38.222783] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:26:44.534 [2024-06-10 12:04:38.222787] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23ec730) on tqpair=0x23849e0 00:26:44.534 [2024-06-10 12:04:38.222792] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:44.534 [2024-06-10 12:04:38.222801] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.534 [2024-06-10 12:04:38.222804] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.534 [2024-06-10 12:04:38.222808] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23849e0) 00:26:44.534 [2024-06-10 12:04:38.222814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.534 [2024-06-10 12:04:38.222824] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ec730, cid 0, qid 0 00:26:44.534 [2024-06-10 12:04:38.223030] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.535 [2024-06-10 12:04:38.223037] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.535 [2024-06-10 12:04:38.223040] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.223044] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23ec730) on tqpair=0x23849e0 00:26:44.535 [2024-06-10 12:04:38.223049] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:44.535 [2024-06-10 12:04:38.223053] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:26:44.535 [2024-06-10 12:04:38.223061] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:26:44.535 [2024-06-10 12:04:38.223069] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:26:44.535 [2024-06-10 12:04:38.223077] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.223081] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.223084] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23849e0) 00:26:44.535 [2024-06-10 12:04:38.223091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.535 [2024-06-10 12:04:38.223101] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ec730, cid 0, qid 0 00:26:44.535 [2024-06-10 12:04:38.223337] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:44.535 [2024-06-10 12:04:38.223343] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:44.535 [2024-06-10 12:04:38.223347] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.223351] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23849e0): datao=0, datal=4096, cccid=0 00:26:44.535 [2024-06-10 12:04:38.223355] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23ec730) on tqpair(0x23849e0): 
expected_datao=0, payload_size=4096 00:26:44.535 [2024-06-10 12:04:38.227249] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.227257] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.227265] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.535 [2024-06-10 12:04:38.227271] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.535 [2024-06-10 12:04:38.227275] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.227281] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23ec730) on tqpair=0x23849e0 00:26:44.535 [2024-06-10 12:04:38.227289] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:26:44.535 [2024-06-10 12:04:38.227297] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:26:44.535 [2024-06-10 12:04:38.227301] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:26:44.535 [2024-06-10 12:04:38.227306] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:26:44.535 [2024-06-10 12:04:38.227310] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:26:44.535 [2024-06-10 12:04:38.227315] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:26:44.535 [2024-06-10 12:04:38.227323] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:26:44.535 [2024-06-10 12:04:38.227330] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.227334] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.227337] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23849e0) 00:26:44.535 [2024-06-10 12:04:38.227344] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:44.535 [2024-06-10 12:04:38.227355] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ec730, cid 0, qid 0 00:26:44.535 [2024-06-10 12:04:38.227575] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.535 [2024-06-10 12:04:38.227581] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.535 [2024-06-10 12:04:38.227585] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.227588] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23ec730) on tqpair=0x23849e0 00:26:44.535 [2024-06-10 12:04:38.227596] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.227600] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.227603] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23849e0) 00:26:44.535 [2024-06-10 12:04:38.227609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
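At this point the driver has finished Identify Controller (note the MDTS-derived max_xfer_size of 131072 above), programmed the async event configuration feature, and is arming its asynchronous event request slots; the remaining three ASYNC EVENT REQUEST submissions continue directly below. An application consumes those events through a callback, roughly as in this hedged sketch (names taken from spdk/nvme.h; the polling loop and stop flag are placeholders, not part of the test):

/* Sketch only: receive the asynchronous events whose request slots are being
 * armed in this part of the log. 'ctrlr' is assumed to come from
 * spdk_nvme_connect() as in the earlier sketch. */
#include <stdio.h>
#include <stdbool.h>
#include "spdk/nvme.h"

static volatile bool g_stop;	/* set elsewhere to end the poll loop */

static void
aer_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	union spdk_nvme_async_event_completion event;

	/* cdw0 of the completion carries event type, info and log page ID. */
	event.raw = cpl->cdw0;
	printf("AER: type %u info 0x%x log page 0x%x\n",
	       event.bits.async_event_type,
	       event.bits.async_event_info,
	       event.bits.log_page_identifier);
}

void
watch_async_events(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

	/* AER completions arrive on the admin queue, so it has to be polled. */
	while (!g_stop) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
}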
00:26:44.535 [2024-06-10 12:04:38.227615] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.227619] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.227622] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x23849e0) 00:26:44.535 [2024-06-10 12:04:38.227628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.535 [2024-06-10 12:04:38.227634] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.227637] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.227640] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x23849e0) 00:26:44.535 [2024-06-10 12:04:38.227646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.535 [2024-06-10 12:04:38.227652] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.227655] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.227659] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23849e0) 00:26:44.535 [2024-06-10 12:04:38.227664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.535 [2024-06-10 12:04:38.227671] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:26:44.535 [2024-06-10 12:04:38.227681] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:44.535 [2024-06-10 12:04:38.227687] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.227690] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.227694] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23849e0) 00:26:44.535 [2024-06-10 12:04:38.227700] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.535 [2024-06-10 12:04:38.227711] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ec730, cid 0, qid 0 00:26:44.535 [2024-06-10 12:04:38.227716] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ec890, cid 1, qid 0 00:26:44.535 [2024-06-10 12:04:38.227721] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ec9f0, cid 2, qid 0 00:26:44.535 [2024-06-10 12:04:38.227726] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ecb50, cid 3, qid 0 00:26:44.535 [2024-06-10 12:04:38.227730] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23eccb0, cid 4, qid 0 00:26:44.535 [2024-06-10 12:04:38.227964] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.535 [2024-06-10 12:04:38.227970] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.535 [2024-06-10 12:04:38.227973] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.227977] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23eccb0) on tqpair=0x23849e0 00:26:44.535 [2024-06-10 12:04:38.227983] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:26:44.535 [2024-06-10 12:04:38.227987] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:26:44.535 [2024-06-10 12:04:38.227997] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.228001] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.228005] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23849e0) 00:26:44.535 [2024-06-10 12:04:38.228011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.535 [2024-06-10 12:04:38.228020] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23eccb0, cid 4, qid 0 00:26:44.535 [2024-06-10 12:04:38.228250] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:44.535 [2024-06-10 12:04:38.228257] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:44.535 [2024-06-10 12:04:38.228261] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.228264] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23849e0): datao=0, datal=4096, cccid=4 00:26:44.535 [2024-06-10 12:04:38.228269] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23eccb0) on tqpair(0x23849e0): expected_datao=0, payload_size=4096 00:26:44.535 [2024-06-10 12:04:38.228293] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.228297] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.269410] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.535 [2024-06-10 12:04:38.269420] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.535 [2024-06-10 12:04:38.269423] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.269427] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23eccb0) on tqpair=0x23849e0 00:26:44.535 [2024-06-10 12:04:38.269440] nvme_ctrlr.c:4023:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:26:44.535 [2024-06-10 12:04:38.269462] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.269466] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.269470] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23849e0) 00:26:44.535 [2024-06-10 12:04:38.269477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.535 [2024-06-10 12:04:38.269483] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.269487] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.535 [2024-06-10 12:04:38.269490] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x23849e0) 00:26:44.535 [2024-06-10 
12:04:38.269497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.535 [2024-06-10 12:04:38.269514] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23eccb0, cid 4, qid 0 00:26:44.535 [2024-06-10 12:04:38.269519] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ece10, cid 5, qid 0 00:26:44.535 [2024-06-10 12:04:38.269705] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:44.536 [2024-06-10 12:04:38.269711] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:44.536 [2024-06-10 12:04:38.269714] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:44.536 [2024-06-10 12:04:38.269718] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23849e0): datao=0, datal=1024, cccid=4 00:26:44.536 [2024-06-10 12:04:38.269722] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23eccb0) on tqpair(0x23849e0): expected_datao=0, payload_size=1024 00:26:44.536 [2024-06-10 12:04:38.269729] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:44.536 [2024-06-10 12:04:38.269733] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:44.536 [2024-06-10 12:04:38.269739] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.536 [2024-06-10 12:04:38.269745] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.536 [2024-06-10 12:04:38.269748] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.536 [2024-06-10 12:04:38.269752] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23ece10) on tqpair=0x23849e0 00:26:44.803 [2024-06-10 12:04:38.310446] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.803 [2024-06-10 12:04:38.310456] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.803 [2024-06-10 12:04:38.310460] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.803 [2024-06-10 12:04:38.310463] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23eccb0) on tqpair=0x23849e0 00:26:44.803 [2024-06-10 12:04:38.310475] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.803 [2024-06-10 12:04:38.310479] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.803 [2024-06-10 12:04:38.310482] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23849e0) 00:26:44.803 [2024-06-10 12:04:38.310489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.803 [2024-06-10 12:04:38.310503] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23eccb0, cid 4, qid 0 00:26:44.803 [2024-06-10 12:04:38.310709] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:44.803 [2024-06-10 12:04:38.310714] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:44.803 [2024-06-10 12:04:38.310718] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:44.803 [2024-06-10 12:04:38.310721] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23849e0): datao=0, datal=3072, cccid=4 00:26:44.803 [2024-06-10 12:04:38.310726] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23eccb0) on tqpair(0x23849e0): expected_datao=0, payload_size=3072 
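The GET LOG PAGE commands around here fetch log page 0x70, the discovery log whose two records are printed further down; the driver reads the header first and then re-reads once it knows how many 1024-byte entries follow, which is where the datal=1024/3072/8 chunks in these lines come from. Fetching the same page directly could look roughly like the following sketch (constants and struct names from spdk/nvme_spec.h and spdk/nvmf_spec.h; the incremental re-read logic is deliberately omitted):

/* Sketch only (not the test's code): fetch the discovery log page (0x70)
 * behind the "Discovery Log Entry 0/1" records printed below. A single 4 KiB
 * read is enough here: the header is 1024 bytes, each entry is 1024 bytes,
 * and the page reports 2 records. */
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <inttypes.h>
#include "spdk/nvme.h"
#include "spdk/nvme_spec.h"
#include "spdk/nvmf_spec.h"

static bool g_log_done;

static void
get_log_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	g_log_done = true;
}

void
print_discovery_log(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvmf_discovery_log_page *log = calloc(1, 4096);
	uint32_t i;

	if (log == NULL) {
		return;
	}
	g_log_done = false;
	if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY, 0,
					     log, 4096, 0, get_log_cb, NULL) != 0) {
		free(log);
		return;
	}
	while (!g_log_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}

	printf("Generation Counter: %" PRIu64 ", Number of Records: %" PRIu64 "\n",
	       log->genctr, log->numrec);
	for (i = 0; i < 3 && i < log->numrec; i++) {
		printf("  entry %u subnqn: %.*s\n", i,
		       (int)sizeof(log->entries[i].subnqn), log->entries[i].subnqn);
	}
	free(log);
}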
00:26:44.803 [2024-06-10 12:04:38.310736] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:44.803 [2024-06-10 12:04:38.310740] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:44.803 [2024-06-10 12:04:38.310887] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.803 [2024-06-10 12:04:38.310893] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.803 [2024-06-10 12:04:38.310896] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.803 [2024-06-10 12:04:38.310900] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23eccb0) on tqpair=0x23849e0 00:26:44.803 [2024-06-10 12:04:38.310909] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.803 [2024-06-10 12:04:38.310912] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.803 [2024-06-10 12:04:38.310916] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23849e0) 00:26:44.803 [2024-06-10 12:04:38.310922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.803 [2024-06-10 12:04:38.310934] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23eccb0, cid 4, qid 0 00:26:44.803 [2024-06-10 12:04:38.311164] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:44.803 [2024-06-10 12:04:38.311170] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:44.803 [2024-06-10 12:04:38.311174] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:44.803 [2024-06-10 12:04:38.311177] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23849e0): datao=0, datal=8, cccid=4 00:26:44.803 [2024-06-10 12:04:38.311181] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23eccb0) on tqpair(0x23849e0): expected_datao=0, payload_size=8 00:26:44.803 [2024-06-10 12:04:38.311188] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:44.803 [2024-06-10 12:04:38.311192] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:44.803 [2024-06-10 12:04:38.355251] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.803 [2024-06-10 12:04:38.355261] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.803 [2024-06-10 12:04:38.355265] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.803 [2024-06-10 12:04:38.355269] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23eccb0) on tqpair=0x23849e0 00:26:44.803 ===================================================== 00:26:44.803 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:44.803 ===================================================== 00:26:44.803 Controller Capabilities/Features 00:26:44.803 ================================ 00:26:44.803 Vendor ID: 0000 00:26:44.803 Subsystem Vendor ID: 0000 00:26:44.803 Serial Number: .................... 00:26:44.803 Model Number: ........................................ 
00:26:44.803 Firmware Version: 24.01.1 00:26:44.803 Recommended Arb Burst: 0 00:26:44.803 IEEE OUI Identifier: 00 00 00 00:26:44.803 Multi-path I/O 00:26:44.803 May have multiple subsystem ports: No 00:26:44.803 May have multiple controllers: No 00:26:44.803 Associated with SR-IOV VF: No 00:26:44.803 Max Data Transfer Size: 131072 00:26:44.803 Max Number of Namespaces: 0 00:26:44.803 Max Number of I/O Queues: 1024 00:26:44.803 NVMe Specification Version (VS): 1.3 00:26:44.803 NVMe Specification Version (Identify): 1.3 00:26:44.803 Maximum Queue Entries: 128 00:26:44.803 Contiguous Queues Required: Yes 00:26:44.803 Arbitration Mechanisms Supported 00:26:44.803 Weighted Round Robin: Not Supported 00:26:44.803 Vendor Specific: Not Supported 00:26:44.803 Reset Timeout: 15000 ms 00:26:44.803 Doorbell Stride: 4 bytes 00:26:44.803 NVM Subsystem Reset: Not Supported 00:26:44.803 Command Sets Supported 00:26:44.803 NVM Command Set: Supported 00:26:44.803 Boot Partition: Not Supported 00:26:44.803 Memory Page Size Minimum: 4096 bytes 00:26:44.803 Memory Page Size Maximum: 4096 bytes 00:26:44.803 Persistent Memory Region: Not Supported 00:26:44.803 Optional Asynchronous Events Supported 00:26:44.803 Namespace Attribute Notices: Not Supported 00:26:44.803 Firmware Activation Notices: Not Supported 00:26:44.803 ANA Change Notices: Not Supported 00:26:44.803 PLE Aggregate Log Change Notices: Not Supported 00:26:44.803 LBA Status Info Alert Notices: Not Supported 00:26:44.803 EGE Aggregate Log Change Notices: Not Supported 00:26:44.803 Normal NVM Subsystem Shutdown event: Not Supported 00:26:44.803 Zone Descriptor Change Notices: Not Supported 00:26:44.803 Discovery Log Change Notices: Supported 00:26:44.803 Controller Attributes 00:26:44.803 128-bit Host Identifier: Not Supported 00:26:44.803 Non-Operational Permissive Mode: Not Supported 00:26:44.803 NVM Sets: Not Supported 00:26:44.803 Read Recovery Levels: Not Supported 00:26:44.803 Endurance Groups: Not Supported 00:26:44.803 Predictable Latency Mode: Not Supported 00:26:44.803 Traffic Based Keep ALive: Not Supported 00:26:44.803 Namespace Granularity: Not Supported 00:26:44.803 SQ Associations: Not Supported 00:26:44.803 UUID List: Not Supported 00:26:44.803 Multi-Domain Subsystem: Not Supported 00:26:44.803 Fixed Capacity Management: Not Supported 00:26:44.803 Variable Capacity Management: Not Supported 00:26:44.803 Delete Endurance Group: Not Supported 00:26:44.803 Delete NVM Set: Not Supported 00:26:44.803 Extended LBA Formats Supported: Not Supported 00:26:44.803 Flexible Data Placement Supported: Not Supported 00:26:44.803 00:26:44.803 Controller Memory Buffer Support 00:26:44.803 ================================ 00:26:44.803 Supported: No 00:26:44.803 00:26:44.803 Persistent Memory Region Support 00:26:44.803 ================================ 00:26:44.804 Supported: No 00:26:44.804 00:26:44.804 Admin Command Set Attributes 00:26:44.804 ============================ 00:26:44.804 Security Send/Receive: Not Supported 00:26:44.804 Format NVM: Not Supported 00:26:44.804 Firmware Activate/Download: Not Supported 00:26:44.804 Namespace Management: Not Supported 00:26:44.804 Device Self-Test: Not Supported 00:26:44.804 Directives: Not Supported 00:26:44.804 NVMe-MI: Not Supported 00:26:44.804 Virtualization Management: Not Supported 00:26:44.804 Doorbell Buffer Config: Not Supported 00:26:44.804 Get LBA Status Capability: Not Supported 00:26:44.804 Command & Feature Lockdown Capability: Not Supported 00:26:44.804 Abort Command Limit: 1 00:26:44.804 
Async Event Request Limit: 4 00:26:44.804 Number of Firmware Slots: N/A 00:26:44.804 Firmware Slot 1 Read-Only: N/A 00:26:44.804 Firmware Activation Without Reset: N/A 00:26:44.804 Multiple Update Detection Support: N/A 00:26:44.804 Firmware Update Granularity: No Information Provided 00:26:44.804 Per-Namespace SMART Log: No 00:26:44.804 Asymmetric Namespace Access Log Page: Not Supported 00:26:44.804 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:44.804 Command Effects Log Page: Not Supported 00:26:44.804 Get Log Page Extended Data: Supported 00:26:44.804 Telemetry Log Pages: Not Supported 00:26:44.804 Persistent Event Log Pages: Not Supported 00:26:44.804 Supported Log Pages Log Page: May Support 00:26:44.804 Commands Supported & Effects Log Page: Not Supported 00:26:44.804 Feature Identifiers & Effects Log Page:May Support 00:26:44.804 NVMe-MI Commands & Effects Log Page: May Support 00:26:44.804 Data Area 4 for Telemetry Log: Not Supported 00:26:44.804 Error Log Page Entries Supported: 128 00:26:44.804 Keep Alive: Not Supported 00:26:44.804 00:26:44.804 NVM Command Set Attributes 00:26:44.804 ========================== 00:26:44.804 Submission Queue Entry Size 00:26:44.804 Max: 1 00:26:44.804 Min: 1 00:26:44.804 Completion Queue Entry Size 00:26:44.804 Max: 1 00:26:44.804 Min: 1 00:26:44.804 Number of Namespaces: 0 00:26:44.804 Compare Command: Not Supported 00:26:44.804 Write Uncorrectable Command: Not Supported 00:26:44.804 Dataset Management Command: Not Supported 00:26:44.804 Write Zeroes Command: Not Supported 00:26:44.804 Set Features Save Field: Not Supported 00:26:44.804 Reservations: Not Supported 00:26:44.804 Timestamp: Not Supported 00:26:44.804 Copy: Not Supported 00:26:44.804 Volatile Write Cache: Not Present 00:26:44.804 Atomic Write Unit (Normal): 1 00:26:44.804 Atomic Write Unit (PFail): 1 00:26:44.804 Atomic Compare & Write Unit: 1 00:26:44.804 Fused Compare & Write: Supported 00:26:44.804 Scatter-Gather List 00:26:44.804 SGL Command Set: Supported 00:26:44.804 SGL Keyed: Supported 00:26:44.804 SGL Bit Bucket Descriptor: Not Supported 00:26:44.804 SGL Metadata Pointer: Not Supported 00:26:44.804 Oversized SGL: Not Supported 00:26:44.804 SGL Metadata Address: Not Supported 00:26:44.804 SGL Offset: Supported 00:26:44.804 Transport SGL Data Block: Not Supported 00:26:44.804 Replay Protected Memory Block: Not Supported 00:26:44.804 00:26:44.804 Firmware Slot Information 00:26:44.804 ========================= 00:26:44.804 Active slot: 0 00:26:44.804 00:26:44.804 00:26:44.804 Error Log 00:26:44.804 ========= 00:26:44.804 00:26:44.804 Active Namespaces 00:26:44.804 ================= 00:26:44.804 Discovery Log Page 00:26:44.804 ================== 00:26:44.804 Generation Counter: 2 00:26:44.804 Number of Records: 2 00:26:44.804 Record Format: 0 00:26:44.804 00:26:44.804 Discovery Log Entry 0 00:26:44.804 ---------------------- 00:26:44.804 Transport Type: 3 (TCP) 00:26:44.804 Address Family: 1 (IPv4) 00:26:44.804 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:44.804 Entry Flags: 00:26:44.804 Duplicate Returned Information: 1 00:26:44.804 Explicit Persistent Connection Support for Discovery: 1 00:26:44.804 Transport Requirements: 00:26:44.804 Secure Channel: Not Required 00:26:44.804 Port ID: 0 (0x0000) 00:26:44.804 Controller ID: 65535 (0xffff) 00:26:44.804 Admin Max SQ Size: 128 00:26:44.804 Transport Service Identifier: 4420 00:26:44.804 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:44.804 Transport Address: 10.0.0.2 00:26:44.804 
Discovery Log Entry 1 00:26:44.804 ---------------------- 00:26:44.804 Transport Type: 3 (TCP) 00:26:44.804 Address Family: 1 (IPv4) 00:26:44.804 Subsystem Type: 2 (NVM Subsystem) 00:26:44.804 Entry Flags: 00:26:44.804 Duplicate Returned Information: 0 00:26:44.804 Explicit Persistent Connection Support for Discovery: 0 00:26:44.804 Transport Requirements: 00:26:44.804 Secure Channel: Not Required 00:26:44.804 Port ID: 0 (0x0000) 00:26:44.804 Controller ID: 65535 (0xffff) 00:26:44.804 Admin Max SQ Size: 128 00:26:44.804 Transport Service Identifier: 4420 00:26:44.804 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:26:44.804 Transport Address: 10.0.0.2 [2024-06-10 12:04:38.355356] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:26:44.804 [2024-06-10 12:04:38.355369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.804 [2024-06-10 12:04:38.355376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.804 [2024-06-10 12:04:38.355382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.804 [2024-06-10 12:04:38.355388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.804 [2024-06-10 12:04:38.355398] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.355402] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.355406] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23849e0) 00:26:44.804 [2024-06-10 12:04:38.355413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.804 [2024-06-10 12:04:38.355425] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ecb50, cid 3, qid 0 00:26:44.804 [2024-06-10 12:04:38.355528] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.804 [2024-06-10 12:04:38.355534] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.804 [2024-06-10 12:04:38.355538] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.355542] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23ecb50) on tqpair=0x23849e0 00:26:44.804 [2024-06-10 12:04:38.355552] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.355555] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.355559] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23849e0) 00:26:44.804 [2024-06-10 12:04:38.355566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.804 [2024-06-10 12:04:38.355578] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ecb50, cid 3, qid 0 00:26:44.804 [2024-06-10 12:04:38.355784] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.804 [2024-06-10 12:04:38.355791] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.804 [2024-06-10 12:04:38.355794] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.355798] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23ecb50) on tqpair=0x23849e0 00:26:44.804 [2024-06-10 12:04:38.355803] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:26:44.804 [2024-06-10 12:04:38.355808] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:26:44.804 [2024-06-10 12:04:38.355816] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.355821] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.355824] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23849e0) 00:26:44.804 [2024-06-10 12:04:38.355831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.804 [2024-06-10 12:04:38.355840] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ecb50, cid 3, qid 0 00:26:44.804 [2024-06-10 12:04:38.356059] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.804 [2024-06-10 12:04:38.356065] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.804 [2024-06-10 12:04:38.356069] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.356072] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23ecb50) on tqpair=0x23849e0 00:26:44.804 [2024-06-10 12:04:38.356083] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.356086] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.356090] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23849e0) 00:26:44.804 [2024-06-10 12:04:38.356097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.804 [2024-06-10 12:04:38.356106] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ecb50, cid 3, qid 0 00:26:44.804 [2024-06-10 12:04:38.356329] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.804 [2024-06-10 12:04:38.356336] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.804 [2024-06-10 12:04:38.356339] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.356343] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23ecb50) on tqpair=0x23849e0 00:26:44.804 [2024-06-10 12:04:38.356353] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.356357] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.356360] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23849e0) 00:26:44.804 [2024-06-10 12:04:38.356367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.804 [2024-06-10 12:04:38.356377] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ecb50, cid 3, qid 0 00:26:44.804 [2024-06-10 12:04:38.356595] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.804 [2024-06-10 
12:04:38.356603] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.804 [2024-06-10 12:04:38.356607] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.356611] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23ecb50) on tqpair=0x23849e0 00:26:44.804 [2024-06-10 12:04:38.356621] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.356624] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.356628] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23849e0) 00:26:44.804 [2024-06-10 12:04:38.356635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.804 [2024-06-10 12:04:38.356644] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ecb50, cid 3, qid 0 00:26:44.804 [2024-06-10 12:04:38.356859] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.804 [2024-06-10 12:04:38.356865] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.804 [2024-06-10 12:04:38.356869] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.356873] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23ecb50) on tqpair=0x23849e0 00:26:44.804 [2024-06-10 12:04:38.356883] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.356887] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.356890] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23849e0) 00:26:44.804 [2024-06-10 12:04:38.356897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.804 [2024-06-10 12:04:38.356906] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ecb50, cid 3, qid 0 00:26:44.804 [2024-06-10 12:04:38.357133] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.804 [2024-06-10 12:04:38.357139] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.804 [2024-06-10 12:04:38.357143] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.357147] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23ecb50) on tqpair=0x23849e0 00:26:44.804 [2024-06-10 12:04:38.357157] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.357160] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.357164] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23849e0) 00:26:44.804 [2024-06-10 12:04:38.357171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.804 [2024-06-10 12:04:38.357180] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ecb50, cid 3, qid 0 00:26:44.804 [2024-06-10 12:04:38.357375] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.804 [2024-06-10 12:04:38.357381] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.804 [2024-06-10 12:04:38.357385] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:26:44.804 [2024-06-10 12:04:38.357389] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23ecb50) on tqpair=0x23849e0 00:26:44.804 [2024-06-10 12:04:38.357399] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.357403] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.357406] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23849e0) 00:26:44.804 [2024-06-10 12:04:38.357413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.804 [2024-06-10 12:04:38.357423] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ecb50, cid 3, qid 0 00:26:44.804 [2024-06-10 12:04:38.357641] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.804 [2024-06-10 12:04:38.357647] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.804 [2024-06-10 12:04:38.357653] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.357657] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23ecb50) on tqpair=0x23849e0 00:26:44.804 [2024-06-10 12:04:38.357667] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.357671] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.357674] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23849e0) 00:26:44.804 [2024-06-10 12:04:38.357681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.804 [2024-06-10 12:04:38.357690] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ecb50, cid 3, qid 0 00:26:44.804 [2024-06-10 12:04:38.357899] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.804 [2024-06-10 12:04:38.357905] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.804 [2024-06-10 12:04:38.357909] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.357913] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23ecb50) on tqpair=0x23849e0 00:26:44.804 [2024-06-10 12:04:38.357923] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.357927] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.357930] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23849e0) 00:26:44.804 [2024-06-10 12:04:38.357937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.804 [2024-06-10 12:04:38.357946] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ecb50, cid 3, qid 0 00:26:44.804 [2024-06-10 12:04:38.358129] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.804 [2024-06-10 12:04:38.358135] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.804 [2024-06-10 12:04:38.358138] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.358142] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23ecb50) on tqpair=0x23849e0 00:26:44.804 [2024-06-10 12:04:38.358152] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.358156] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.804 [2024-06-10 12:04:38.358159] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23849e0) 00:26:44.804 [2024-06-10 12:04:38.358166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.804 [2024-06-10 12:04:38.358175] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ecb50, cid 3, qid 0 00:26:44.804 [2024-06-10 12:04:38.358359] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.804 [2024-06-10 12:04:38.358365] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.805 [2024-06-10 12:04:38.358369] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.358373] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23ecb50) on tqpair=0x23849e0 00:26:44.805 [2024-06-10 12:04:38.358383] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.358386] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.358390] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23849e0) 00:26:44.805 [2024-06-10 12:04:38.358397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.805 [2024-06-10 12:04:38.358406] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ecb50, cid 3, qid 0 00:26:44.805 [2024-06-10 12:04:38.358628] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.805 [2024-06-10 12:04:38.358634] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.805 [2024-06-10 12:04:38.358637] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.358645] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23ecb50) on tqpair=0x23849e0 00:26:44.805 [2024-06-10 12:04:38.358655] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.358659] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.358662] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23849e0) 00:26:44.805 [2024-06-10 12:04:38.358669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.805 [2024-06-10 12:04:38.358679] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ecb50, cid 3, qid 0 00:26:44.805 [2024-06-10 12:04:38.358890] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.805 [2024-06-10 12:04:38.358896] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.805 [2024-06-10 12:04:38.358900] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.358903] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23ecb50) on tqpair=0x23849e0 00:26:44.805 [2024-06-10 12:04:38.358913] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.358917] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.805 [2024-06-10 
12:04:38.358920] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23849e0) 00:26:44.805 [2024-06-10 12:04:38.358927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.805 [2024-06-10 12:04:38.358936] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ecb50, cid 3, qid 0 00:26:44.805 [2024-06-10 12:04:38.359163] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.805 [2024-06-10 12:04:38.359169] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.805 [2024-06-10 12:04:38.359173] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.359177] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23ecb50) on tqpair=0x23849e0 00:26:44.805 [2024-06-10 12:04:38.359187] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.359191] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.359194] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23849e0) 00:26:44.805 [2024-06-10 12:04:38.359201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.805 [2024-06-10 12:04:38.359210] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ecb50, cid 3, qid 0 00:26:44.805 [2024-06-10 12:04:38.363249] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.805 [2024-06-10 12:04:38.363257] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.805 [2024-06-10 12:04:38.363260] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.363264] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23ecb50) on tqpair=0x23849e0 00:26:44.805 [2024-06-10 12:04:38.363273] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:26:44.805 00:26:44.805 12:04:38 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:26:44.805 [2024-06-10 12:04:38.400545] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
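The "shutdown complete in 7 milliseconds" line marks the end of detaching from the discovery controller: the driver set CC.SHN and polled CSTS through the FABRIC PROPERTY GET/SET commands in the preceding lines, and identify.sh then repeats the identify run against nqn.2016-06.io.spdk:cnode1. For reference, the same teardown can be driven without a blocking call, as in this hedged sketch using the public detach API (not the tool's own code):

/* Sketch: non-blocking variant of the teardown that produced the
 * "shutdown complete" message above; spdk_nvme_detach() does the same work
 * synchronously. */
#include "spdk/nvme.h"

void
detach_controller(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_detach_ctx *ctx = NULL;

	if (spdk_nvme_detach_async(ctrlr, &ctx) != 0 || ctx == NULL) {
		return;
	}
	/* Wait here until the controller reports shutdown processing complete;
	 * an application that wants to interleave other work would poll the
	 * context incrementally instead. */
	spdk_nvme_detach_poll(ctx);
}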
00:26:44.805 [2024-06-10 12:04:38.400587] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2079803 ] 00:26:44.805 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.805 [2024-06-10 12:04:38.438306] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:26:44.805 [2024-06-10 12:04:38.438351] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:44.805 [2024-06-10 12:04:38.438356] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:44.805 [2024-06-10 12:04:38.438367] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:44.805 [2024-06-10 12:04:38.438373] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:44.805 [2024-06-10 12:04:38.438827] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:26:44.805 [2024-06-10 12:04:38.438850] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x8ed9e0 0 00:26:44.805 [2024-06-10 12:04:38.452249] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:44.805 [2024-06-10 12:04:38.452261] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:44.805 [2024-06-10 12:04:38.452266] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:44.805 [2024-06-10 12:04:38.452269] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:44.805 [2024-06-10 12:04:38.452301] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.452306] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.452310] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8ed9e0) 00:26:44.805 [2024-06-10 12:04:38.452321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:44.805 [2024-06-10 12:04:38.452337] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955730, cid 0, qid 0 00:26:44.805 [2024-06-10 12:04:38.460251] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.805 [2024-06-10 12:04:38.460260] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.805 [2024-06-10 12:04:38.460264] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.460268] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955730) on tqpair=0x8ed9e0 00:26:44.805 [2024-06-10 12:04:38.460277] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:44.805 [2024-06-10 12:04:38.460283] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:26:44.805 [2024-06-10 12:04:38.460288] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:26:44.805 [2024-06-10 12:04:38.460301] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.460305] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.805 [2024-06-10 
12:04:38.460309] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8ed9e0) 00:26:44.805 [2024-06-10 12:04:38.460316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.805 [2024-06-10 12:04:38.460330] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955730, cid 0, qid 0 00:26:44.805 [2024-06-10 12:04:38.460499] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.805 [2024-06-10 12:04:38.460506] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.805 [2024-06-10 12:04:38.460509] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.460513] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955730) on tqpair=0x8ed9e0 00:26:44.805 [2024-06-10 12:04:38.460520] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:26:44.805 [2024-06-10 12:04:38.460528] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:26:44.805 [2024-06-10 12:04:38.460537] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.460541] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.460544] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8ed9e0) 00:26:44.805 [2024-06-10 12:04:38.460551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.805 [2024-06-10 12:04:38.460562] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955730, cid 0, qid 0 00:26:44.805 [2024-06-10 12:04:38.460723] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.805 [2024-06-10 12:04:38.460729] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.805 [2024-06-10 12:04:38.460732] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.460736] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955730) on tqpair=0x8ed9e0 00:26:44.805 [2024-06-10 12:04:38.460741] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:26:44.805 [2024-06-10 12:04:38.460749] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:26:44.805 [2024-06-10 12:04:38.460756] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.460759] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.460763] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8ed9e0) 00:26:44.805 [2024-06-10 12:04:38.460769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.805 [2024-06-10 12:04:38.460779] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955730, cid 0, qid 0 00:26:44.805 [2024-06-10 12:04:38.460939] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.805 [2024-06-10 12:04:38.460946] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.805 
[2024-06-10 12:04:38.460949] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.460953] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955730) on tqpair=0x8ed9e0 00:26:44.805 [2024-06-10 12:04:38.460958] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:44.805 [2024-06-10 12:04:38.460967] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.460971] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.460974] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8ed9e0) 00:26:44.805 [2024-06-10 12:04:38.460981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.805 [2024-06-10 12:04:38.460990] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955730, cid 0, qid 0 00:26:44.805 [2024-06-10 12:04:38.461174] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.805 [2024-06-10 12:04:38.461180] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.805 [2024-06-10 12:04:38.461184] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.461187] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955730) on tqpair=0x8ed9e0 00:26:44.805 [2024-06-10 12:04:38.461192] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:26:44.805 [2024-06-10 12:04:38.461196] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:26:44.805 [2024-06-10 12:04:38.461204] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:44.805 [2024-06-10 12:04:38.461309] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:26:44.805 [2024-06-10 12:04:38.461315] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:44.805 [2024-06-10 12:04:38.461323] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.461327] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.461330] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8ed9e0) 00:26:44.805 [2024-06-10 12:04:38.461337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.805 [2024-06-10 12:04:38.461347] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955730, cid 0, qid 0 00:26:44.805 [2024-06-10 12:04:38.461541] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.805 [2024-06-10 12:04:38.461548] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.805 [2024-06-10 12:04:38.461551] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.461555] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955730) on tqpair=0x8ed9e0 00:26:44.805 
[2024-06-10 12:04:38.461559] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:44.805 [2024-06-10 12:04:38.461568] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.461572] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.461575] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8ed9e0) 00:26:44.805 [2024-06-10 12:04:38.461582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.805 [2024-06-10 12:04:38.461592] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955730, cid 0, qid 0 00:26:44.805 [2024-06-10 12:04:38.461809] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.805 [2024-06-10 12:04:38.461815] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.805 [2024-06-10 12:04:38.461819] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.461822] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955730) on tqpair=0x8ed9e0 00:26:44.805 [2024-06-10 12:04:38.461827] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:44.805 [2024-06-10 12:04:38.461831] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:26:44.805 [2024-06-10 12:04:38.461839] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:26:44.805 [2024-06-10 12:04:38.461850] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:26:44.805 [2024-06-10 12:04:38.461858] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.461862] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.461865] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8ed9e0) 00:26:44.805 [2024-06-10 12:04:38.461872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.805 [2024-06-10 12:04:38.461882] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955730, cid 0, qid 0 00:26:44.805 [2024-06-10 12:04:38.462112] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:44.805 [2024-06-10 12:04:38.462119] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:44.805 [2024-06-10 12:04:38.462122] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.462126] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8ed9e0): datao=0, datal=4096, cccid=0 00:26:44.805 [2024-06-10 12:04:38.462133] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x955730) on tqpair(0x8ed9e0): expected_datao=0, payload_size=4096 00:26:44.805 [2024-06-10 12:04:38.462141] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.462145] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
00:26:44.805 [2024-06-10 12:04:38.507248] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.805 [2024-06-10 12:04:38.507258] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.805 [2024-06-10 12:04:38.507262] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.507266] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955730) on tqpair=0x8ed9e0 00:26:44.805 [2024-06-10 12:04:38.507274] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:26:44.805 [2024-06-10 12:04:38.507282] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:26:44.805 [2024-06-10 12:04:38.507286] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:26:44.805 [2024-06-10 12:04:38.507290] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:26:44.805 [2024-06-10 12:04:38.507295] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:26:44.805 [2024-06-10 12:04:38.507299] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:26:44.805 [2024-06-10 12:04:38.507308] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:26:44.805 [2024-06-10 12:04:38.507315] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.507318] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.507322] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8ed9e0) 00:26:44.805 [2024-06-10 12:04:38.507329] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:44.805 [2024-06-10 12:04:38.507341] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955730, cid 0, qid 0 00:26:44.805 [2024-06-10 12:04:38.507526] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.805 [2024-06-10 12:04:38.507532] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.805 [2024-06-10 12:04:38.507536] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.805 [2024-06-10 12:04:38.507540] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955730) on tqpair=0x8ed9e0 00:26:44.806 [2024-06-10 12:04:38.507546] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.507550] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.507553] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8ed9e0) 00:26:44.806 [2024-06-10 12:04:38.507559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.806 [2024-06-10 12:04:38.507565] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.507569] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.507572] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on 
tqpair(0x8ed9e0) 00:26:44.806 [2024-06-10 12:04:38.507578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.806 [2024-06-10 12:04:38.507583] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.507587] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.507590] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x8ed9e0) 00:26:44.806 [2024-06-10 12:04:38.507596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.806 [2024-06-10 12:04:38.507604] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.507608] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.507611] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8ed9e0) 00:26:44.806 [2024-06-10 12:04:38.507616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.806 [2024-06-10 12:04:38.507621] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:44.806 [2024-06-10 12:04:38.507631] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:44.806 [2024-06-10 12:04:38.507638] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.507641] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.507645] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8ed9e0) 00:26:44.806 [2024-06-10 12:04:38.507651] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.806 [2024-06-10 12:04:38.507663] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955730, cid 0, qid 0 00:26:44.806 [2024-06-10 12:04:38.507668] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955890, cid 1, qid 0 00:26:44.806 [2024-06-10 12:04:38.507673] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9559f0, cid 2, qid 0 00:26:44.806 [2024-06-10 12:04:38.507677] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955b50, cid 3, qid 0 00:26:44.806 [2024-06-10 12:04:38.507682] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955cb0, cid 4, qid 0 00:26:44.806 [2024-06-10 12:04:38.507907] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.806 [2024-06-10 12:04:38.507913] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.806 [2024-06-10 12:04:38.507916] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.507920] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955cb0) on tqpair=0x8ed9e0 00:26:44.806 [2024-06-10 12:04:38.507925] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:26:44.806 [2024-06-10 12:04:38.507929] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:26:44.806 [2024-06-10 12:04:38.507937] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:26:44.806 [2024-06-10 12:04:38.507943] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:44.806 [2024-06-10 12:04:38.507949] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.507952] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.507956] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8ed9e0) 00:26:44.806 [2024-06-10 12:04:38.507962] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:44.806 [2024-06-10 12:04:38.507972] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955cb0, cid 4, qid 0 00:26:44.806 [2024-06-10 12:04:38.508156] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.806 [2024-06-10 12:04:38.508162] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.806 [2024-06-10 12:04:38.508165] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.508169] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955cb0) on tqpair=0x8ed9e0 00:26:44.806 [2024-06-10 12:04:38.508222] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:26:44.806 [2024-06-10 12:04:38.508232] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:44.806 [2024-06-10 12:04:38.508238] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.508255] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.508259] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8ed9e0) 00:26:44.806 [2024-06-10 12:04:38.508265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.806 [2024-06-10 12:04:38.508276] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955cb0, cid 4, qid 0 00:26:44.806 [2024-06-10 12:04:38.508492] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:44.806 [2024-06-10 12:04:38.508499] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:44.806 [2024-06-10 12:04:38.508502] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.508506] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8ed9e0): datao=0, datal=4096, cccid=4 00:26:44.806 [2024-06-10 12:04:38.508510] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x955cb0) on tqpair(0x8ed9e0): expected_datao=0, payload_size=4096 00:26:44.806 [2024-06-10 12:04:38.508518] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.508521] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.508681] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.806 [2024-06-10 12:04:38.508688] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.806 [2024-06-10 12:04:38.508691] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.508695] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955cb0) on tqpair=0x8ed9e0 00:26:44.806 [2024-06-10 12:04:38.508703] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:26:44.806 [2024-06-10 12:04:38.508712] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:26:44.806 [2024-06-10 12:04:38.508721] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:26:44.806 [2024-06-10 12:04:38.508727] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.508731] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.508734] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8ed9e0) 00:26:44.806 [2024-06-10 12:04:38.508741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.806 [2024-06-10 12:04:38.508751] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955cb0, cid 4, qid 0 00:26:44.806 [2024-06-10 12:04:38.508927] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:44.806 [2024-06-10 12:04:38.508934] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:44.806 [2024-06-10 12:04:38.508937] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.508941] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8ed9e0): datao=0, datal=4096, cccid=4 00:26:44.806 [2024-06-10 12:04:38.508945] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x955cb0) on tqpair(0x8ed9e0): expected_datao=0, payload_size=4096 00:26:44.806 [2024-06-10 12:04:38.508952] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.508956] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.509110] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.806 [2024-06-10 12:04:38.509120] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.806 [2024-06-10 12:04:38.509124] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.509128] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955cb0) on tqpair=0x8ed9e0 00:26:44.806 [2024-06-10 12:04:38.509140] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:44.806 [2024-06-10 12:04:38.509149] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:44.806 [2024-06-10 12:04:38.509155] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.509159] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.806 [2024-06-10 
12:04:38.509163] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8ed9e0) 00:26:44.806 [2024-06-10 12:04:38.509169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.806 [2024-06-10 12:04:38.509180] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955cb0, cid 4, qid 0 00:26:44.806 [2024-06-10 12:04:38.509396] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:44.806 [2024-06-10 12:04:38.509403] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:44.806 [2024-06-10 12:04:38.509407] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.509410] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8ed9e0): datao=0, datal=4096, cccid=4 00:26:44.806 [2024-06-10 12:04:38.509414] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x955cb0) on tqpair(0x8ed9e0): expected_datao=0, payload_size=4096 00:26:44.806 [2024-06-10 12:04:38.509491] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.509494] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.509628] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.806 [2024-06-10 12:04:38.509635] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.806 [2024-06-10 12:04:38.509638] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.509642] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955cb0) on tqpair=0x8ed9e0 00:26:44.806 [2024-06-10 12:04:38.509649] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:44.806 [2024-06-10 12:04:38.509656] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:26:44.806 [2024-06-10 12:04:38.509664] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:26:44.806 [2024-06-10 12:04:38.509670] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:44.806 [2024-06-10 12:04:38.509675] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:26:44.806 [2024-06-10 12:04:38.509680] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:26:44.806 [2024-06-10 12:04:38.509684] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:26:44.806 [2024-06-10 12:04:38.509689] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:26:44.806 [2024-06-10 12:04:38.509702] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.509706] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.509709] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8ed9e0) 00:26:44.806 [2024-06-10 
12:04:38.509717] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.806 [2024-06-10 12:04:38.509723] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.509727] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.509730] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8ed9e0) 00:26:44.806 [2024-06-10 12:04:38.509736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.806 [2024-06-10 12:04:38.509750] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955cb0, cid 4, qid 0 00:26:44.806 [2024-06-10 12:04:38.509755] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955e10, cid 5, qid 0 00:26:44.806 [2024-06-10 12:04:38.509964] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.806 [2024-06-10 12:04:38.509970] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.806 [2024-06-10 12:04:38.509973] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.509977] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955cb0) on tqpair=0x8ed9e0 00:26:44.806 [2024-06-10 12:04:38.509984] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.806 [2024-06-10 12:04:38.509989] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.806 [2024-06-10 12:04:38.509993] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.509996] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955e10) on tqpair=0x8ed9e0 00:26:44.806 [2024-06-10 12:04:38.510005] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.510009] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.510012] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8ed9e0) 00:26:44.806 [2024-06-10 12:04:38.510018] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.806 [2024-06-10 12:04:38.510028] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955e10, cid 5, qid 0 00:26:44.806 [2024-06-10 12:04:38.510192] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.806 [2024-06-10 12:04:38.510198] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.806 [2024-06-10 12:04:38.510202] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.510205] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955e10) on tqpair=0x8ed9e0 00:26:44.806 [2024-06-10 12:04:38.510214] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.806 [2024-06-10 12:04:38.510218] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.807 [2024-06-10 12:04:38.510221] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8ed9e0) 00:26:44.807 [2024-06-10 12:04:38.510227] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:44.807 [2024-06-10 12:04:38.510237] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955e10, cid 5, qid 0 00:26:44.807 [2024-06-10 12:04:38.510414] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.807 [2024-06-10 12:04:38.510421] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.807 [2024-06-10 12:04:38.510424] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.807 [2024-06-10 12:04:38.510428] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955e10) on tqpair=0x8ed9e0 00:26:44.807 [2024-06-10 12:04:38.510436] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.807 [2024-06-10 12:04:38.510440] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.807 [2024-06-10 12:04:38.510443] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8ed9e0) 00:26:44.807 [2024-06-10 12:04:38.510450] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.807 [2024-06-10 12:04:38.510461] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955e10, cid 5, qid 0 00:26:44.807 [2024-06-10 12:04:38.510674] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.807 [2024-06-10 12:04:38.510680] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.807 [2024-06-10 12:04:38.510684] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.807 [2024-06-10 12:04:38.510687] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955e10) on tqpair=0x8ed9e0 00:26:44.807 [2024-06-10 12:04:38.510698] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.807 [2024-06-10 12:04:38.510702] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.807 [2024-06-10 12:04:38.510706] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8ed9e0) 00:26:44.807 [2024-06-10 12:04:38.510712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.807 [2024-06-10 12:04:38.510719] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.807 [2024-06-10 12:04:38.510723] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.807 [2024-06-10 12:04:38.510726] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8ed9e0) 00:26:44.807 [2024-06-10 12:04:38.510732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.807 [2024-06-10 12:04:38.510739] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.807 [2024-06-10 12:04:38.510743] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.807 [2024-06-10 12:04:38.510746] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x8ed9e0) 00:26:44.807 [2024-06-10 12:04:38.510752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.807 [2024-06-10 12:04:38.510759] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.807 [2024-06-10 
12:04:38.510762] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.807 [2024-06-10 12:04:38.510766] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x8ed9e0) 00:26:44.807 [2024-06-10 12:04:38.510772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.807 [2024-06-10 12:04:38.510783] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955e10, cid 5, qid 0 00:26:44.807 [2024-06-10 12:04:38.510787] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955cb0, cid 4, qid 0 00:26:44.807 [2024-06-10 12:04:38.510792] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955f70, cid 6, qid 0 00:26:44.807 [2024-06-10 12:04:38.510797] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9560d0, cid 7, qid 0 00:26:44.807 [2024-06-10 12:04:38.511074] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:44.807 [2024-06-10 12:04:38.511080] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:44.807 [2024-06-10 12:04:38.511084] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:44.807 [2024-06-10 12:04:38.511087] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8ed9e0): datao=0, datal=8192, cccid=5 00:26:44.807 [2024-06-10 12:04:38.511091] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x955e10) on tqpair(0x8ed9e0): expected_datao=0, payload_size=8192 00:26:44.807 [2024-06-10 12:04:38.511148] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:44.807 [2024-06-10 12:04:38.511153] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:44.807 [2024-06-10 12:04:38.511158] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:44.807 [2024-06-10 12:04:38.511164] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:44.807 [2024-06-10 12:04:38.511169] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:44.807 [2024-06-10 12:04:38.511173] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8ed9e0): datao=0, datal=512, cccid=4 00:26:44.807 [2024-06-10 12:04:38.511177] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x955cb0) on tqpair(0x8ed9e0): expected_datao=0, payload_size=512 00:26:44.807 [2024-06-10 12:04:38.511184] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:44.807 [2024-06-10 12:04:38.511187] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:44.807 [2024-06-10 12:04:38.511193] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:44.807 [2024-06-10 12:04:38.511198] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:44.807 [2024-06-10 12:04:38.511202] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:44.807 [2024-06-10 12:04:38.511205] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8ed9e0): datao=0, datal=512, cccid=6 00:26:44.807 [2024-06-10 12:04:38.511209] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x955f70) on tqpair(0x8ed9e0): expected_datao=0, payload_size=512 00:26:44.807 [2024-06-10 12:04:38.511216] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:44.807 [2024-06-10 12:04:38.511220] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:44.807 
[2024-06-10 12:04:38.511225] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:44.807 [2024-06-10 12:04:38.511231] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:44.807 [2024-06-10 12:04:38.511234] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:44.807 [2024-06-10 12:04:38.511238] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8ed9e0): datao=0, datal=4096, cccid=7 00:26:44.807 [2024-06-10 12:04:38.515247] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9560d0) on tqpair(0x8ed9e0): expected_datao=0, payload_size=4096 00:26:44.807 [2024-06-10 12:04:38.515255] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:44.807 [2024-06-10 12:04:38.515259] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:44.807 [2024-06-10 12:04:38.515266] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.807 [2024-06-10 12:04:38.515272] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.807 [2024-06-10 12:04:38.515275] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.807 [2024-06-10 12:04:38.515279] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955e10) on tqpair=0x8ed9e0 00:26:44.807 [2024-06-10 12:04:38.515293] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.807 [2024-06-10 12:04:38.515299] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.807 [2024-06-10 12:04:38.515302] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.807 [2024-06-10 12:04:38.515306] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955cb0) on tqpair=0x8ed9e0 00:26:44.807 [2024-06-10 12:04:38.515314] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.807 [2024-06-10 12:04:38.515320] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.807 [2024-06-10 12:04:38.515323] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.807 [2024-06-10 12:04:38.515327] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955f70) on tqpair=0x8ed9e0 00:26:44.807 [2024-06-10 12:04:38.515333] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.807 [2024-06-10 12:04:38.515339] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.807 [2024-06-10 12:04:38.515343] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.807 [2024-06-10 12:04:38.515346] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9560d0) on tqpair=0x8ed9e0 00:26:44.807 ===================================================== 00:26:44.807 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:44.807 ===================================================== 00:26:44.807 Controller Capabilities/Features 00:26:44.807 ================================ 00:26:44.807 Vendor ID: 8086 00:26:44.807 Subsystem Vendor ID: 8086 00:26:44.807 Serial Number: SPDK00000000000001 00:26:44.807 Model Number: SPDK bdev Controller 00:26:44.807 Firmware Version: 24.01.1 00:26:44.807 Recommended Arb Burst: 6 00:26:44.807 IEEE OUI Identifier: e4 d2 5c 00:26:44.807 Multi-path I/O 00:26:44.807 May have multiple subsystem ports: Yes 00:26:44.807 May have multiple controllers: Yes 00:26:44.807 Associated with SR-IOV VF: No 00:26:44.807 Max Data Transfer Size: 131072 00:26:44.807 Max Number of Namespaces: 32 
00:26:44.807 Max Number of I/O Queues: 127 00:26:44.807 NVMe Specification Version (VS): 1.3 00:26:44.807 NVMe Specification Version (Identify): 1.3 00:26:44.807 Maximum Queue Entries: 128 00:26:44.807 Contiguous Queues Required: Yes 00:26:44.807 Arbitration Mechanisms Supported 00:26:44.807 Weighted Round Robin: Not Supported 00:26:44.807 Vendor Specific: Not Supported 00:26:44.807 Reset Timeout: 15000 ms 00:26:44.807 Doorbell Stride: 4 bytes 00:26:44.807 NVM Subsystem Reset: Not Supported 00:26:44.807 Command Sets Supported 00:26:44.807 NVM Command Set: Supported 00:26:44.807 Boot Partition: Not Supported 00:26:44.807 Memory Page Size Minimum: 4096 bytes 00:26:44.807 Memory Page Size Maximum: 4096 bytes 00:26:44.807 Persistent Memory Region: Not Supported 00:26:44.807 Optional Asynchronous Events Supported 00:26:44.807 Namespace Attribute Notices: Supported 00:26:44.807 Firmware Activation Notices: Not Supported 00:26:44.807 ANA Change Notices: Not Supported 00:26:44.807 PLE Aggregate Log Change Notices: Not Supported 00:26:44.807 LBA Status Info Alert Notices: Not Supported 00:26:44.807 EGE Aggregate Log Change Notices: Not Supported 00:26:44.807 Normal NVM Subsystem Shutdown event: Not Supported 00:26:44.807 Zone Descriptor Change Notices: Not Supported 00:26:44.807 Discovery Log Change Notices: Not Supported 00:26:44.807 Controller Attributes 00:26:44.807 128-bit Host Identifier: Supported 00:26:44.807 Non-Operational Permissive Mode: Not Supported 00:26:44.807 NVM Sets: Not Supported 00:26:44.807 Read Recovery Levels: Not Supported 00:26:44.807 Endurance Groups: Not Supported 00:26:44.807 Predictable Latency Mode: Not Supported 00:26:44.807 Traffic Based Keep ALive: Not Supported 00:26:44.807 Namespace Granularity: Not Supported 00:26:44.807 SQ Associations: Not Supported 00:26:44.807 UUID List: Not Supported 00:26:44.807 Multi-Domain Subsystem: Not Supported 00:26:44.807 Fixed Capacity Management: Not Supported 00:26:44.807 Variable Capacity Management: Not Supported 00:26:44.807 Delete Endurance Group: Not Supported 00:26:44.807 Delete NVM Set: Not Supported 00:26:44.807 Extended LBA Formats Supported: Not Supported 00:26:44.807 Flexible Data Placement Supported: Not Supported 00:26:44.807 00:26:44.807 Controller Memory Buffer Support 00:26:44.807 ================================ 00:26:44.807 Supported: No 00:26:44.807 00:26:44.807 Persistent Memory Region Support 00:26:44.807 ================================ 00:26:44.807 Supported: No 00:26:44.807 00:26:44.807 Admin Command Set Attributes 00:26:44.807 ============================ 00:26:44.807 Security Send/Receive: Not Supported 00:26:44.807 Format NVM: Not Supported 00:26:44.807 Firmware Activate/Download: Not Supported 00:26:44.807 Namespace Management: Not Supported 00:26:44.807 Device Self-Test: Not Supported 00:26:44.807 Directives: Not Supported 00:26:44.807 NVMe-MI: Not Supported 00:26:44.807 Virtualization Management: Not Supported 00:26:44.807 Doorbell Buffer Config: Not Supported 00:26:44.807 Get LBA Status Capability: Not Supported 00:26:44.807 Command & Feature Lockdown Capability: Not Supported 00:26:44.807 Abort Command Limit: 4 00:26:44.807 Async Event Request Limit: 4 00:26:44.807 Number of Firmware Slots: N/A 00:26:44.807 Firmware Slot 1 Read-Only: N/A 00:26:44.807 Firmware Activation Without Reset: N/A 00:26:44.807 Multiple Update Detection Support: N/A 00:26:44.807 Firmware Update Granularity: No Information Provided 00:26:44.807 Per-Namespace SMART Log: No 00:26:44.807 Asymmetric Namespace Access Log Page: Not 
Supported 00:26:44.807 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:26:44.807 Command Effects Log Page: Supported 00:26:44.807 Get Log Page Extended Data: Supported 00:26:44.807 Telemetry Log Pages: Not Supported 00:26:44.807 Persistent Event Log Pages: Not Supported 00:26:44.807 Supported Log Pages Log Page: May Support 00:26:44.807 Commands Supported & Effects Log Page: Not Supported 00:26:44.807 Feature Identifiers & Effects Log Page:May Support 00:26:44.807 NVMe-MI Commands & Effects Log Page: May Support 00:26:44.807 Data Area 4 for Telemetry Log: Not Supported 00:26:44.807 Error Log Page Entries Supported: 128 00:26:44.807 Keep Alive: Supported 00:26:44.807 Keep Alive Granularity: 10000 ms 00:26:44.807 00:26:44.807 NVM Command Set Attributes 00:26:44.807 ========================== 00:26:44.807 Submission Queue Entry Size 00:26:44.807 Max: 64 00:26:44.807 Min: 64 00:26:44.807 Completion Queue Entry Size 00:26:44.807 Max: 16 00:26:44.807 Min: 16 00:26:44.807 Number of Namespaces: 32 00:26:44.807 Compare Command: Supported 00:26:44.807 Write Uncorrectable Command: Not Supported 00:26:44.807 Dataset Management Command: Supported 00:26:44.807 Write Zeroes Command: Supported 00:26:44.807 Set Features Save Field: Not Supported 00:26:44.807 Reservations: Supported 00:26:44.807 Timestamp: Not Supported 00:26:44.807 Copy: Supported 00:26:44.807 Volatile Write Cache: Present 00:26:44.807 Atomic Write Unit (Normal): 1 00:26:44.807 Atomic Write Unit (PFail): 1 00:26:44.807 Atomic Compare & Write Unit: 1 00:26:44.807 Fused Compare & Write: Supported 00:26:44.807 Scatter-Gather List 00:26:44.807 SGL Command Set: Supported 00:26:44.807 SGL Keyed: Supported 00:26:44.807 SGL Bit Bucket Descriptor: Not Supported 00:26:44.807 SGL Metadata Pointer: Not Supported 00:26:44.807 Oversized SGL: Not Supported 00:26:44.807 SGL Metadata Address: Not Supported 00:26:44.807 SGL Offset: Supported 00:26:44.807 Transport SGL Data Block: Not Supported 00:26:44.807 Replay Protected Memory Block: Not Supported 00:26:44.807 00:26:44.807 Firmware Slot Information 00:26:44.807 ========================= 00:26:44.807 Active slot: 1 00:26:44.807 Slot 1 Firmware Revision: 24.01.1 00:26:44.807 00:26:44.807 00:26:44.807 Commands Supported and Effects 00:26:44.807 ============================== 00:26:44.807 Admin Commands 00:26:44.807 -------------- 00:26:44.807 Get Log Page (02h): Supported 00:26:44.807 Identify (06h): Supported 00:26:44.807 Abort (08h): Supported 00:26:44.807 Set Features (09h): Supported 00:26:44.807 Get Features (0Ah): Supported 00:26:44.807 Asynchronous Event Request (0Ch): Supported 00:26:44.807 Keep Alive (18h): Supported 00:26:44.807 I/O Commands 00:26:44.807 ------------ 00:26:44.807 Flush (00h): Supported LBA-Change 00:26:44.807 Write (01h): Supported LBA-Change 00:26:44.807 Read (02h): Supported 00:26:44.807 Compare (05h): Supported 00:26:44.807 Write Zeroes (08h): Supported LBA-Change 00:26:44.807 Dataset Management (09h): Supported LBA-Change 00:26:44.807 Copy (19h): Supported LBA-Change 00:26:44.807 Unknown (79h): Supported LBA-Change 00:26:44.807 Unknown (7Ah): Supported 00:26:44.807 00:26:44.807 Error Log 00:26:44.807 ========= 00:26:44.807 00:26:44.807 Arbitration 00:26:44.807 =========== 00:26:44.807 Arbitration Burst: 1 00:26:44.807 00:26:44.807 Power Management 00:26:44.807 ================ 00:26:44.807 Number of Power States: 1 00:26:44.807 Current Power State: Power State #0 00:26:44.807 Power State #0: 00:26:44.807 Max Power: 0.00 W 00:26:44.807 Non-Operational State: Operational 
00:26:44.808 Entry Latency: Not Reported 00:26:44.808 Exit Latency: Not Reported 00:26:44.808 Relative Read Throughput: 0 00:26:44.808 Relative Read Latency: 0 00:26:44.808 Relative Write Throughput: 0 00:26:44.808 Relative Write Latency: 0 00:26:44.808 Idle Power: Not Reported 00:26:44.808 Active Power: Not Reported 00:26:44.808 Non-Operational Permissive Mode: Not Supported 00:26:44.808 00:26:44.808 Health Information 00:26:44.808 ================== 00:26:44.808 Critical Warnings: 00:26:44.808 Available Spare Space: OK 00:26:44.808 Temperature: OK 00:26:44.808 Device Reliability: OK 00:26:44.808 Read Only: No 00:26:44.808 Volatile Memory Backup: OK 00:26:44.808 Current Temperature: 0 Kelvin (-273 Celsius) 00:26:44.808 Temperature Threshold: [2024-06-10 12:04:38.515449] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.515455] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.515458] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x8ed9e0) 00:26:44.808 [2024-06-10 12:04:38.515466] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.808 [2024-06-10 12:04:38.515479] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9560d0, cid 7, qid 0 00:26:44.808 [2024-06-10 12:04:38.515678] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.808 [2024-06-10 12:04:38.515684] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.808 [2024-06-10 12:04:38.515688] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.515691] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9560d0) on tqpair=0x8ed9e0 00:26:44.808 [2024-06-10 12:04:38.515721] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:26:44.808 [2024-06-10 12:04:38.515732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.808 [2024-06-10 12:04:38.515738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.808 [2024-06-10 12:04:38.515744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.808 [2024-06-10 12:04:38.515750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.808 [2024-06-10 12:04:38.515758] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.515762] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.515765] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8ed9e0) 00:26:44.808 [2024-06-10 12:04:38.515772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.808 [2024-06-10 12:04:38.515784] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955b50, cid 3, qid 0 00:26:44.808 [2024-06-10 12:04:38.515945] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.808 [2024-06-10 12:04:38.515951] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:26:44.808 [2024-06-10 12:04:38.515955] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.515958] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955b50) on tqpair=0x8ed9e0 00:26:44.808 [2024-06-10 12:04:38.515965] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.515969] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.515972] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8ed9e0) 00:26:44.808 [2024-06-10 12:04:38.515979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.808 [2024-06-10 12:04:38.515991] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955b50, cid 3, qid 0 00:26:44.808 [2024-06-10 12:04:38.516173] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.808 [2024-06-10 12:04:38.516179] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.808 [2024-06-10 12:04:38.516183] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.516186] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955b50) on tqpair=0x8ed9e0 00:26:44.808 [2024-06-10 12:04:38.516191] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:26:44.808 [2024-06-10 12:04:38.516196] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:26:44.808 [2024-06-10 12:04:38.516205] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.516208] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.516212] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8ed9e0) 00:26:44.808 [2024-06-10 12:04:38.516219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.808 [2024-06-10 12:04:38.516230] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955b50, cid 3, qid 0 00:26:44.808 [2024-06-10 12:04:38.516409] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.808 [2024-06-10 12:04:38.516416] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.808 [2024-06-10 12:04:38.516419] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.516423] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955b50) on tqpair=0x8ed9e0 00:26:44.808 [2024-06-10 12:04:38.516432] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.516436] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.516440] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8ed9e0) 00:26:44.808 [2024-06-10 12:04:38.516446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.808 [2024-06-10 12:04:38.516456] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955b50, cid 3, qid 0 00:26:44.808 [2024-06-10 12:04:38.516636] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:26:44.808 [2024-06-10 12:04:38.516642] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.808 [2024-06-10 12:04:38.516645] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.516649] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955b50) on tqpair=0x8ed9e0 00:26:44.808 [2024-06-10 12:04:38.516658] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.516662] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.516666] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8ed9e0) 00:26:44.808 [2024-06-10 12:04:38.516672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.808 [2024-06-10 12:04:38.516681] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955b50, cid 3, qid 0 00:26:44.808 [2024-06-10 12:04:38.516864] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.808 [2024-06-10 12:04:38.516870] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.808 [2024-06-10 12:04:38.516874] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.516877] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955b50) on tqpair=0x8ed9e0 00:26:44.808 [2024-06-10 12:04:38.516886] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.516890] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.516894] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8ed9e0) 00:26:44.808 [2024-06-10 12:04:38.516900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.808 [2024-06-10 12:04:38.516910] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955b50, cid 3, qid 0 00:26:44.808 [2024-06-10 12:04:38.517086] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.808 [2024-06-10 12:04:38.517093] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.808 [2024-06-10 12:04:38.517096] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.517100] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955b50) on tqpair=0x8ed9e0 00:26:44.808 [2024-06-10 12:04:38.517109] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.517112] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.517116] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8ed9e0) 00:26:44.808 [2024-06-10 12:04:38.517122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.808 [2024-06-10 12:04:38.517134] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955b50, cid 3, qid 0 00:26:44.808 [2024-06-10 12:04:38.517317] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.808 [2024-06-10 12:04:38.517323] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.808 [2024-06-10 12:04:38.517327] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.517330] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955b50) on tqpair=0x8ed9e0 00:26:44.808 [2024-06-10 12:04:38.517340] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.517344] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.517347] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8ed9e0) 00:26:44.808 [2024-06-10 12:04:38.517354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.808 [2024-06-10 12:04:38.517363] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955b50, cid 3, qid 0 00:26:44.808 [2024-06-10 12:04:38.517587] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.808 [2024-06-10 12:04:38.517593] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.808 [2024-06-10 12:04:38.517597] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.517600] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955b50) on tqpair=0x8ed9e0 00:26:44.808 [2024-06-10 12:04:38.517610] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.517613] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.517617] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8ed9e0) 00:26:44.808 [2024-06-10 12:04:38.517623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.808 [2024-06-10 12:04:38.517633] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955b50, cid 3, qid 0 00:26:44.808 [2024-06-10 12:04:38.517803] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.808 [2024-06-10 12:04:38.517809] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.808 [2024-06-10 12:04:38.517813] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.517816] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955b50) on tqpair=0x8ed9e0 00:26:44.808 [2024-06-10 12:04:38.517826] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.517829] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.517833] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8ed9e0) 00:26:44.808 [2024-06-10 12:04:38.517839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.808 [2024-06-10 12:04:38.517849] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955b50, cid 3, qid 0 00:26:44.808 [2024-06-10 12:04:38.518037] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.808 [2024-06-10 12:04:38.518043] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.808 [2024-06-10 12:04:38.518047] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.518050] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955b50) on tqpair=0x8ed9e0 00:26:44.808 [2024-06-10 12:04:38.518060] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.518063] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.518067] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8ed9e0) 00:26:44.808 [2024-06-10 12:04:38.518073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.808 [2024-06-10 12:04:38.518083] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955b50, cid 3, qid 0 00:26:44.808 [2024-06-10 12:04:38.518260] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.808 [2024-06-10 12:04:38.518267] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.808 [2024-06-10 12:04:38.518270] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.518274] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955b50) on tqpair=0x8ed9e0 00:26:44.808 [2024-06-10 12:04:38.518283] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.518287] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.518290] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8ed9e0) 00:26:44.808 [2024-06-10 12:04:38.518297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.808 [2024-06-10 12:04:38.518307] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955b50, cid 3, qid 0 00:26:44.808 [2024-06-10 12:04:38.518564] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.808 [2024-06-10 12:04:38.518570] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.808 [2024-06-10 12:04:38.518573] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.518577] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955b50) on tqpair=0x8ed9e0 00:26:44.808 [2024-06-10 12:04:38.518586] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.518590] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.518593] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8ed9e0) 00:26:44.808 [2024-06-10 12:04:38.518600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.808 [2024-06-10 12:04:38.518609] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955b50, cid 3, qid 0 00:26:44.808 [2024-06-10 12:04:38.518771] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.808 [2024-06-10 12:04:38.518777] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.808 [2024-06-10 12:04:38.518781] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.518784] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955b50) on tqpair=0x8ed9e0 00:26:44.808 [2024-06-10 12:04:38.518793] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.518797] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.808 [2024-06-10 
12:04:38.518801] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8ed9e0) 00:26:44.808 [2024-06-10 12:04:38.518807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.808 [2024-06-10 12:04:38.518816] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955b50, cid 3, qid 0 00:26:44.808 [2024-06-10 12:04:38.518993] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.808 [2024-06-10 12:04:38.518999] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.808 [2024-06-10 12:04:38.519003] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.519006] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955b50) on tqpair=0x8ed9e0 00:26:44.808 [2024-06-10 12:04:38.519015] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.519019] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.519022] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8ed9e0) 00:26:44.808 [2024-06-10 12:04:38.519029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.808 [2024-06-10 12:04:38.519038] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955b50, cid 3, qid 0 00:26:44.808 [2024-06-10 12:04:38.519218] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.808 [2024-06-10 12:04:38.519225] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.808 [2024-06-10 12:04:38.519229] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.519233] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955b50) on tqpair=0x8ed9e0 00:26:44.808 [2024-06-10 12:04:38.523246] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.523252] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.523255] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8ed9e0) 00:26:44.808 [2024-06-10 12:04:38.523262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.808 [2024-06-10 12:04:38.523273] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x955b50, cid 3, qid 0 00:26:44.808 [2024-06-10 12:04:38.523447] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.808 [2024-06-10 12:04:38.523453] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.808 [2024-06-10 12:04:38.523457] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.808 [2024-06-10 12:04:38.523460] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x955b50) on tqpair=0x8ed9e0 00:26:44.808 [2024-06-10 12:04:38.523468] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:26:44.808 0 Kelvin (-273 Celsius) 00:26:44.808 Available Spare: 0% 00:26:44.808 Available Spare Threshold: 0% 00:26:44.808 Life Percentage Used: 0% 00:26:44.808 Data Units Read: 0 00:26:44.808 Data Units Written: 0 00:26:44.808 Host Read Commands: 0 00:26:44.808 Host 
Write Commands: 0 00:26:44.808 Controller Busy Time: 0 minutes 00:26:44.808 Power Cycles: 0 00:26:44.808 Power On Hours: 0 hours 00:26:44.808 Unsafe Shutdowns: 0 00:26:44.808 Unrecoverable Media Errors: 0 00:26:44.808 Lifetime Error Log Entries: 0 00:26:44.808 Warning Temperature Time: 0 minutes 00:26:44.808 Critical Temperature Time: 0 minutes 00:26:44.809 00:26:44.809 Number of Queues 00:26:44.809 ================ 00:26:44.809 Number of I/O Submission Queues: 127 00:26:44.809 Number of I/O Completion Queues: 127 00:26:44.809 00:26:44.809 Active Namespaces 00:26:44.809 ================= 00:26:44.809 Namespace ID:1 00:26:44.809 Error Recovery Timeout: Unlimited 00:26:44.809 Command Set Identifier: NVM (00h) 00:26:44.809 Deallocate: Supported 00:26:44.809 Deallocated/Unwritten Error: Not Supported 00:26:44.809 Deallocated Read Value: Unknown 00:26:44.809 Deallocate in Write Zeroes: Not Supported 00:26:44.809 Deallocated Guard Field: 0xFFFF 00:26:44.809 Flush: Supported 00:26:44.809 Reservation: Supported 00:26:44.809 Namespace Sharing Capabilities: Multiple Controllers 00:26:44.809 Size (in LBAs): 131072 (0GiB) 00:26:44.809 Capacity (in LBAs): 131072 (0GiB) 00:26:44.809 Utilization (in LBAs): 131072 (0GiB) 00:26:44.809 NGUID: ABCDEF0123456789ABCDEF0123456789 00:26:44.809 EUI64: ABCDEF0123456789 00:26:44.809 UUID: d912572c-febe-488d-b9d6-5cf1c4a20024 00:26:44.809 Thin Provisioning: Not Supported 00:26:44.809 Per-NS Atomic Units: Yes 00:26:44.809 Atomic Boundary Size (Normal): 0 00:26:44.809 Atomic Boundary Size (PFail): 0 00:26:44.809 Atomic Boundary Offset: 0 00:26:44.809 Maximum Single Source Range Length: 65535 00:26:44.809 Maximum Copy Length: 65535 00:26:44.809 Maximum Source Range Count: 1 00:26:44.809 NGUID/EUI64 Never Reused: No 00:26:44.809 Namespace Write Protected: No 00:26:44.809 Number of LBA Formats: 1 00:26:44.809 Current LBA Format: LBA Format #00 00:26:44.809 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:44.809 00:26:44.809 12:04:38 -- host/identify.sh@51 -- # sync 00:26:44.809 12:04:38 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:44.809 12:04:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:44.809 12:04:38 -- common/autotest_common.sh@10 -- # set +x 00:26:44.809 12:04:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:44.809 12:04:38 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:26:44.809 12:04:38 -- host/identify.sh@56 -- # nvmftestfini 00:26:44.809 12:04:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:44.809 12:04:38 -- nvmf/common.sh@116 -- # sync 00:26:44.809 12:04:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:44.809 12:04:38 -- nvmf/common.sh@119 -- # set +e 00:26:44.809 12:04:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:44.809 12:04:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:44.809 rmmod nvme_tcp 00:26:45.070 rmmod nvme_fabrics 00:26:45.070 rmmod nvme_keyring 00:26:45.070 12:04:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:45.070 12:04:38 -- nvmf/common.sh@123 -- # set -e 00:26:45.070 12:04:38 -- nvmf/common.sh@124 -- # return 0 00:26:45.070 12:04:38 -- nvmf/common.sh@477 -- # '[' -n 2079442 ']' 00:26:45.070 12:04:38 -- nvmf/common.sh@478 -- # killprocess 2079442 00:26:45.070 12:04:38 -- common/autotest_common.sh@926 -- # '[' -z 2079442 ']' 00:26:45.070 12:04:38 -- common/autotest_common.sh@930 -- # kill -0 2079442 00:26:45.070 12:04:38 -- common/autotest_common.sh@931 -- # uname 00:26:45.070 12:04:38 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:45.070 12:04:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2079442 00:26:45.070 12:04:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:45.070 12:04:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:45.070 12:04:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2079442' 00:26:45.070 killing process with pid 2079442 00:26:45.070 12:04:38 -- common/autotest_common.sh@945 -- # kill 2079442 00:26:45.070 [2024-06-10 12:04:38.676559] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:45.070 12:04:38 -- common/autotest_common.sh@950 -- # wait 2079442 00:26:45.070 12:04:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:45.070 12:04:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:45.070 12:04:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:45.070 12:04:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:45.070 12:04:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:45.070 12:04:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.070 12:04:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:45.070 12:04:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.616 12:04:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:47.616 00:26:47.616 real 0m11.113s 00:26:47.616 user 0m7.887s 00:26:47.616 sys 0m5.766s 00:26:47.616 12:04:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:47.616 12:04:40 -- common/autotest_common.sh@10 -- # set +x 00:26:47.616 ************************************ 00:26:47.616 END TEST nvmf_identify 00:26:47.616 ************************************ 00:26:47.616 12:04:40 -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:47.616 12:04:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:47.616 12:04:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:47.616 12:04:40 -- common/autotest_common.sh@10 -- # set +x 00:26:47.616 ************************************ 00:26:47.616 START TEST nvmf_perf 00:26:47.616 ************************************ 00:26:47.616 12:04:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:47.616 * Looking for test storage... 
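The identify-test teardown traced above reduces to roughly the sequence below; this is a condensed sketch using the pid and module names from this run, not a verbatim replay of host/identify.sh:

  # delete the test subsystem over the RPC socket, then unwind the initiator-side modules
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp        # trace shows rmmod nvme_tcp
  modprobe -v -r nvme-fabrics    # trace shows rmmod nvme_fabrics and nvme_keyring
  # stop the target app started for the test (pid 2079442 in this run)
  kill 2079442 && wait 2079442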
00:26:47.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:47.616 12:04:41 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:47.616 12:04:41 -- nvmf/common.sh@7 -- # uname -s 00:26:47.616 12:04:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:47.616 12:04:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:47.616 12:04:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:47.616 12:04:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:47.616 12:04:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:47.616 12:04:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:47.616 12:04:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:47.616 12:04:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:47.616 12:04:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:47.616 12:04:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:47.616 12:04:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:47.616 12:04:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:47.616 12:04:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:47.616 12:04:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:47.616 12:04:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:47.616 12:04:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:47.616 12:04:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:47.616 12:04:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:47.616 12:04:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:47.616 12:04:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.617 12:04:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.617 12:04:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.617 12:04:41 -- paths/export.sh@5 -- # export PATH 00:26:47.617 12:04:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.617 12:04:41 -- nvmf/common.sh@46 -- # : 0 00:26:47.617 12:04:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:47.617 12:04:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:47.617 12:04:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:47.617 12:04:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:47.617 12:04:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:47.617 12:04:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:47.617 12:04:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:47.617 12:04:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:47.617 12:04:41 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:47.617 12:04:41 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:47.617 12:04:41 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:47.617 12:04:41 -- host/perf.sh@17 -- # nvmftestinit 00:26:47.617 12:04:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:47.617 12:04:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:47.617 12:04:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:47.617 12:04:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:47.617 12:04:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:47.617 12:04:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.617 12:04:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:47.617 12:04:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.617 12:04:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:47.617 12:04:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:47.617 12:04:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:47.617 12:04:41 -- common/autotest_common.sh@10 -- # set +x 00:26:54.206 12:04:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:54.206 12:04:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:54.206 12:04:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:54.206 12:04:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:54.206 12:04:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:54.206 12:04:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:54.206 12:04:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:54.206 12:04:47 -- nvmf/common.sh@294 -- # net_devs=() 
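The NVME_HOSTNQN/NVME_HOST options generated above via `nvme gen-hostnqn` are the kind of arguments a kernel-initiator connect would consume; the perf test below drives the target with spdk_nvme_perf instead, so the following is only an illustrative, assumed usage, borrowing the address and subsystem NQN used later in this run:

  HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-... in this run
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN"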
00:26:54.206 12:04:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:54.206 12:04:47 -- nvmf/common.sh@295 -- # e810=() 00:26:54.206 12:04:47 -- nvmf/common.sh@295 -- # local -ga e810 00:26:54.206 12:04:47 -- nvmf/common.sh@296 -- # x722=() 00:26:54.206 12:04:47 -- nvmf/common.sh@296 -- # local -ga x722 00:26:54.206 12:04:47 -- nvmf/common.sh@297 -- # mlx=() 00:26:54.206 12:04:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:54.206 12:04:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:54.206 12:04:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:54.206 12:04:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:54.206 12:04:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:54.206 12:04:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:54.206 12:04:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:54.206 12:04:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:54.206 12:04:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:54.206 12:04:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:54.206 12:04:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:54.206 12:04:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:54.206 12:04:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:54.206 12:04:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:54.206 12:04:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:54.206 12:04:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:54.206 12:04:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:54.206 12:04:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:54.206 12:04:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:54.206 12:04:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:54.206 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:54.206 12:04:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:54.206 12:04:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:54.206 12:04:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.206 12:04:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.206 12:04:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:54.206 12:04:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:54.206 12:04:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:54.206 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:54.206 12:04:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:54.206 12:04:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:54.206 12:04:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.206 12:04:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.206 12:04:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:54.206 12:04:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:54.206 12:04:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:54.206 12:04:47 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:54.206 12:04:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:54.206 12:04:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.206 12:04:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:54.206 12:04:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
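Device discovery above matches two Intel E810 ports (device id 0x159b, ice driver) and resolves each PCI address to its netdev by globbing sysfs; the same lookup can be reproduced by hand:

  # equivalent of pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) in nvmf/common.sh
  ls /sys/bus/pci/devices/0000:31:00.0/net    # -> cvl_0_0 in this run
  ls /sys/bus/pci/devices/0000:31:00.1/net    # -> cvl_0_1 in this run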
00:26:54.206 12:04:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:54.206 Found net devices under 0000:31:00.0: cvl_0_0 00:26:54.206 12:04:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.206 12:04:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:54.206 12:04:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.206 12:04:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:54.206 12:04:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.206 12:04:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:54.206 Found net devices under 0000:31:00.1: cvl_0_1 00:26:54.206 12:04:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.206 12:04:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:54.206 12:04:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:54.206 12:04:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:54.206 12:04:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:54.206 12:04:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:54.206 12:04:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:54.206 12:04:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:54.206 12:04:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:54.206 12:04:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:54.206 12:04:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:54.206 12:04:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:54.206 12:04:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:54.206 12:04:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:54.206 12:04:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:54.206 12:04:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:54.206 12:04:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:54.206 12:04:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:54.206 12:04:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:54.206 12:04:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:54.206 12:04:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:54.206 12:04:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:54.206 12:04:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:54.206 12:04:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:54.206 12:04:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:54.206 12:04:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:54.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:54.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:26:54.206 00:26:54.206 --- 10.0.0.2 ping statistics --- 00:26:54.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.206 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:26:54.206 12:04:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:54.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:54.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:26:54.206 00:26:54.206 --- 10.0.0.1 ping statistics --- 00:26:54.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.206 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:26:54.206 12:04:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:54.206 12:04:47 -- nvmf/common.sh@410 -- # return 0 00:26:54.206 12:04:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:54.206 12:04:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:54.206 12:04:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:54.206 12:04:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:54.206 12:04:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:54.206 12:04:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:54.206 12:04:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:54.206 12:04:47 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:54.206 12:04:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:54.206 12:04:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:54.206 12:04:47 -- common/autotest_common.sh@10 -- # set +x 00:26:54.206 12:04:47 -- nvmf/common.sh@469 -- # nvmfpid=2083872 00:26:54.206 12:04:47 -- nvmf/common.sh@470 -- # waitforlisten 2083872 00:26:54.206 12:04:47 -- common/autotest_common.sh@819 -- # '[' -z 2083872 ']' 00:26:54.206 12:04:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.206 12:04:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:54.206 12:04:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.206 12:04:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:54.206 12:04:47 -- common/autotest_common.sh@10 -- # set +x 00:26:54.206 12:04:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:54.206 [2024-06-10 12:04:47.973815] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:54.206 [2024-06-10 12:04:47.973876] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.467 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.467 [2024-06-10 12:04:48.044611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:54.467 [2024-06-10 12:04:48.118089] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:54.467 [2024-06-10 12:04:48.118224] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:54.467 [2024-06-10 12:04:48.118234] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:54.467 [2024-06-10 12:04:48.118248] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
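The network plumbing and target launch traced above amount to the following condensed sequence (interface names, addresses, and app arguments as used in this run; repository paths shortened):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  # start the target inside the namespace (trace shows -i 0 -e 0xFFFF -m 0xF)
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &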
00:26:54.467 [2024-06-10 12:04:48.118318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.467 [2024-06-10 12:04:48.118435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:54.467 [2024-06-10 12:04:48.118597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.467 [2024-06-10 12:04:48.118598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:55.038 12:04:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:55.038 12:04:48 -- common/autotest_common.sh@852 -- # return 0 00:26:55.038 12:04:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:55.038 12:04:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:55.038 12:04:48 -- common/autotest_common.sh@10 -- # set +x 00:26:55.038 12:04:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:55.038 12:04:48 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:55.038 12:04:48 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:26:55.609 12:04:49 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:26:55.609 12:04:49 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:55.870 12:04:49 -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:26:55.870 12:04:49 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:55.870 12:04:49 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:55.870 12:04:49 -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:26:55.870 12:04:49 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:55.870 12:04:49 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:55.870 12:04:49 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:56.131 [2024-06-10 12:04:49.744409] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:56.131 12:04:49 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:56.391 12:04:49 -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:56.391 12:04:49 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:56.391 12:04:50 -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:56.391 12:04:50 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:56.653 12:04:50 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:56.653 [2024-06-10 12:04:50.407015] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:56.913 12:04:50 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:56.913 12:04:50 -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:26:56.913 12:04:50 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:56.913 12:04:50 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
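Stripped of the xtrace noise, the target-side bring-up performed above by host/perf.sh is essentially this rpc.py sequence (repository paths shortened; the load_subsystem_config feed from gen_nvme.sh is paraphrased as a pipe; bdev names as reported in the trace):

  # attach the local NVMe controller as bdev Nvme0n1 from the config emitted by gen_nvme.sh
  scripts/gen_nvme.sh | scripts/rpc.py load_subsystem_config
  scripts/rpc.py framework_get_config bdev \
      | jq -r '.[].params | select(.name=="Nvme0").traddr'       # -> 0000:65:00.0
  scripts/rpc.py bdev_malloc_create 64 512                        # -> Malloc0
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420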
00:26:56.913 12:04:50 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:58.298 Initializing NVMe Controllers 00:26:58.298 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:26:58.298 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:26:58.298 Initialization complete. Launching workers. 00:26:58.298 ======================================================== 00:26:58.298 Latency(us) 00:26:58.298 Device Information : IOPS MiB/s Average min max 00:26:58.298 PCIE (0000:65:00.0) NSID 1 from core 0: 81207.33 317.22 393.51 13.35 4758.18 00:26:58.298 ======================================================== 00:26:58.298 Total : 81207.33 317.22 393.51 13.35 4758.18 00:26:58.298 00:26:58.298 12:04:51 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:58.298 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.678 Initializing NVMe Controllers 00:26:59.678 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:59.678 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:59.678 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:59.678 Initialization complete. Launching workers. 00:26:59.678 ======================================================== 00:26:59.678 Latency(us) 00:26:59.678 Device Information : IOPS MiB/s Average min max 00:26:59.678 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 118.00 0.46 8676.98 387.00 44918.91 00:26:59.678 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 47.00 0.18 21586.84 5015.12 48862.66 00:26:59.678 ======================================================== 00:26:59.678 Total : 165.00 0.64 12354.33 387.00 48862.66 00:26:59.678 00:26:59.679 12:04:53 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:59.679 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.064 Initializing NVMe Controllers 00:27:01.064 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:01.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:01.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:01.064 Initialization complete. Launching workers. 
00:27:01.064 ======================================================== 00:27:01.064 Latency(us) 00:27:01.064 Device Information : IOPS MiB/s Average min max 00:27:01.064 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10256.69 40.07 3120.51 400.13 6728.31 00:27:01.064 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3916.88 15.30 8219.73 5228.02 17589.83 00:27:01.064 ======================================================== 00:27:01.064 Total : 14173.57 55.37 4529.68 400.13 17589.83 00:27:01.064 00:27:01.064 12:04:54 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:27:01.064 12:04:54 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:27:01.064 12:04:54 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:01.064 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.609 Initializing NVMe Controllers 00:27:03.609 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:03.609 Controller IO queue size 128, less than required. 00:27:03.609 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:03.609 Controller IO queue size 128, less than required. 00:27:03.609 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:03.609 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:03.609 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:03.609 Initialization complete. Launching workers. 00:27:03.609 ======================================================== 00:27:03.609 Latency(us) 00:27:03.609 Device Information : IOPS MiB/s Average min max 00:27:03.609 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 972.90 243.23 133677.49 73917.99 234454.70 00:27:03.609 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 558.58 139.65 241302.74 62338.85 398932.91 00:27:03.609 ======================================================== 00:27:03.609 Total : 1531.48 382.87 172931.94 62338.85 398932.91 00:27:03.609 00:27:03.609 12:04:56 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:27:03.609 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.609 No valid NVMe controllers or AIO or URING devices found 00:27:03.609 Initializing NVMe Controllers 00:27:03.609 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:03.609 Controller IO queue size 128, less than required. 00:27:03.609 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:03.609 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:03.609 Controller IO queue size 128, less than required. 00:27:03.609 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:03.609 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:27:03.609 WARNING: Some requested NVMe devices were skipped 00:27:03.609 12:04:57 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:27:03.609 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.155 Initializing NVMe Controllers 00:27:06.155 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:06.155 Controller IO queue size 128, less than required. 00:27:06.155 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:06.155 Controller IO queue size 128, less than required. 00:27:06.155 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:06.155 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:06.155 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:06.155 Initialization complete. Launching workers. 00:27:06.155 00:27:06.155 ==================== 00:27:06.155 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:06.155 TCP transport: 00:27:06.155 polls: 43772 00:27:06.155 idle_polls: 15154 00:27:06.155 sock_completions: 28618 00:27:06.155 nvme_completions: 3443 00:27:06.155 submitted_requests: 5265 00:27:06.155 queued_requests: 1 00:27:06.155 00:27:06.155 ==================== 00:27:06.155 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:06.155 TCP transport: 00:27:06.155 polls: 41000 00:27:06.155 idle_polls: 12274 00:27:06.155 sock_completions: 28726 00:27:06.155 nvme_completions: 3694 00:27:06.155 submitted_requests: 5716 00:27:06.155 queued_requests: 1 00:27:06.155 ======================================================== 00:27:06.155 Latency(us) 00:27:06.155 Device Information : IOPS MiB/s Average min max 00:27:06.155 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 924.50 231.12 142805.98 75833.12 209435.37 00:27:06.155 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 987.00 246.75 134025.69 68044.31 239557.28 00:27:06.155 ======================================================== 00:27:06.155 Total : 1911.50 477.88 138272.29 68044.31 239557.28 00:27:06.155 00:27:06.155 12:04:59 -- host/perf.sh@66 -- # sync 00:27:06.155 12:04:59 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:06.416 12:04:59 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:27:06.416 12:04:59 -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:27:06.416 12:04:59 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:27:07.358 12:05:01 -- host/perf.sh@72 -- # ls_guid=c9ccb716-e31f-4e2e-93ec-f53bc507dfcc 00:27:07.358 12:05:01 -- host/perf.sh@73 -- # get_lvs_free_mb c9ccb716-e31f-4e2e-93ec-f53bc507dfcc 00:27:07.359 12:05:01 -- common/autotest_common.sh@1343 -- # local lvs_uuid=c9ccb716-e31f-4e2e-93ec-f53bc507dfcc 00:27:07.359 12:05:01 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:07.359 12:05:01 -- common/autotest_common.sh@1345 -- # local fc 00:27:07.359 12:05:01 -- common/autotest_common.sh@1346 -- # local cs 00:27:07.359 12:05:01 -- common/autotest_common.sh@1347 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:07.619 12:05:01 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:07.619 { 00:27:07.619 "uuid": "c9ccb716-e31f-4e2e-93ec-f53bc507dfcc", 00:27:07.619 "name": "lvs_0", 00:27:07.619 "base_bdev": "Nvme0n1", 00:27:07.619 "total_data_clusters": 457407, 00:27:07.619 "free_clusters": 457407, 00:27:07.619 "block_size": 512, 00:27:07.619 "cluster_size": 4194304 00:27:07.619 } 00:27:07.619 ]' 00:27:07.619 12:05:01 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="c9ccb716-e31f-4e2e-93ec-f53bc507dfcc") .free_clusters' 00:27:07.619 12:05:01 -- common/autotest_common.sh@1348 -- # fc=457407 00:27:07.619 12:05:01 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="c9ccb716-e31f-4e2e-93ec-f53bc507dfcc") .cluster_size' 00:27:07.619 12:05:01 -- common/autotest_common.sh@1349 -- # cs=4194304 00:27:07.619 12:05:01 -- common/autotest_common.sh@1352 -- # free_mb=1829628 00:27:07.619 12:05:01 -- common/autotest_common.sh@1353 -- # echo 1829628 00:27:07.619 1829628 00:27:07.619 12:05:01 -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:27:07.619 12:05:01 -- host/perf.sh@78 -- # free_mb=20480 00:27:07.619 12:05:01 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c9ccb716-e31f-4e2e-93ec-f53bc507dfcc lbd_0 20480 00:27:07.880 12:05:01 -- host/perf.sh@80 -- # lb_guid=4e76c097-9c51-42f2-a8eb-516f100158f2 00:27:07.880 12:05:01 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 4e76c097-9c51-42f2-a8eb-516f100158f2 lvs_n_0 00:27:09.806 12:05:03 -- host/perf.sh@83 -- # ls_nested_guid=3d8e0602-6bd3-4c09-bf6a-9825b3a4d6c0 00:27:09.806 12:05:03 -- host/perf.sh@84 -- # get_lvs_free_mb 3d8e0602-6bd3-4c09-bf6a-9825b3a4d6c0 00:27:09.806 12:05:03 -- common/autotest_common.sh@1343 -- # local lvs_uuid=3d8e0602-6bd3-4c09-bf6a-9825b3a4d6c0 00:27:09.806 12:05:03 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:09.806 12:05:03 -- common/autotest_common.sh@1345 -- # local fc 00:27:09.806 12:05:03 -- common/autotest_common.sh@1346 -- # local cs 00:27:09.806 12:05:03 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:09.806 12:05:03 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:09.807 { 00:27:09.807 "uuid": "c9ccb716-e31f-4e2e-93ec-f53bc507dfcc", 00:27:09.807 "name": "lvs_0", 00:27:09.807 "base_bdev": "Nvme0n1", 00:27:09.807 "total_data_clusters": 457407, 00:27:09.807 "free_clusters": 452287, 00:27:09.807 "block_size": 512, 00:27:09.807 "cluster_size": 4194304 00:27:09.807 }, 00:27:09.807 { 00:27:09.807 "uuid": "3d8e0602-6bd3-4c09-bf6a-9825b3a4d6c0", 00:27:09.807 "name": "lvs_n_0", 00:27:09.807 "base_bdev": "4e76c097-9c51-42f2-a8eb-516f100158f2", 00:27:09.807 "total_data_clusters": 5114, 00:27:09.807 "free_clusters": 5114, 00:27:09.807 "block_size": 512, 00:27:09.807 "cluster_size": 4194304 00:27:09.807 } 00:27:09.807 ]' 00:27:09.807 12:05:03 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="3d8e0602-6bd3-4c09-bf6a-9825b3a4d6c0") .free_clusters' 00:27:09.807 12:05:03 -- common/autotest_common.sh@1348 -- # fc=5114 00:27:09.807 12:05:03 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="3d8e0602-6bd3-4c09-bf6a-9825b3a4d6c0") .cluster_size' 00:27:09.807 12:05:03 -- common/autotest_common.sh@1349 -- # cs=4194304 00:27:09.807 12:05:03 -- common/autotest_common.sh@1352 
-- # free_mb=20456 00:27:09.807 12:05:03 -- common/autotest_common.sh@1353 -- # echo 20456 00:27:09.807 20456 00:27:09.807 12:05:03 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:27:09.807 12:05:03 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3d8e0602-6bd3-4c09-bf6a-9825b3a4d6c0 lbd_nest_0 20456 00:27:09.807 12:05:03 -- host/perf.sh@88 -- # lb_nested_guid=ef6bcb5b-bfc8-4715-8f21-239e6a55fe7a 00:27:09.807 12:05:03 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:10.067 12:05:03 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:27:10.067 12:05:03 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 ef6bcb5b-bfc8-4715-8f21-239e6a55fe7a 00:27:10.067 12:05:03 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:10.327 12:05:03 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:27:10.327 12:05:03 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:27:10.327 12:05:03 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:10.327 12:05:03 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:10.327 12:05:03 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:10.327 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.556 Initializing NVMe Controllers 00:27:22.556 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:22.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:22.556 Initialization complete. Launching workers. 00:27:22.556 ======================================================== 00:27:22.556 Latency(us) 00:27:22.556 Device Information : IOPS MiB/s Average min max 00:27:22.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 48.40 0.02 20744.42 217.53 49466.99 00:27:22.556 ======================================================== 00:27:22.556 Total : 48.40 0.02 20744.42 217.53 49466.99 00:27:22.556 00:27:22.556 12:05:14 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:22.556 12:05:14 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:22.556 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.613 Initializing NVMe Controllers 00:27:32.613 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:32.613 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:32.613 Initialization complete. Launching workers. 
00:27:32.613 ======================================================== 00:27:32.613 Latency(us) 00:27:32.613 Device Information : IOPS MiB/s Average min max 00:27:32.613 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 66.50 8.31 15068.03 7977.03 51878.94 00:27:32.613 ======================================================== 00:27:32.613 Total : 66.50 8.31 15068.03 7977.03 51878.94 00:27:32.613 00:27:32.613 12:05:24 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:32.613 12:05:24 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:32.613 12:05:24 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:32.613 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.609 Initializing NVMe Controllers 00:27:42.609 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:42.609 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:42.609 Initialization complete. Launching workers. 00:27:42.609 ======================================================== 00:27:42.609 Latency(us) 00:27:42.609 Device Information : IOPS MiB/s Average min max 00:27:42.609 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9138.17 4.46 3511.00 259.86 43289.00 00:27:42.609 ======================================================== 00:27:42.609 Total : 9138.17 4.46 3511.00 259.86 43289.00 00:27:42.609 00:27:42.609 12:05:35 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:42.609 12:05:35 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:42.609 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.608 Initializing NVMe Controllers 00:27:52.608 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:52.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:52.608 Initialization complete. Launching workers. 00:27:52.608 ======================================================== 00:27:52.608 Latency(us) 00:27:52.608 Device Information : IOPS MiB/s Average min max 00:27:52.608 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2072.68 259.09 15439.41 1158.57 33435.46 00:27:52.608 ======================================================== 00:27:52.608 Total : 2072.68 259.09 15439.41 1158.57 33435.46 00:27:52.608 00:27:52.608 12:05:45 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:52.609 12:05:45 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:52.609 12:05:45 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:52.609 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.611 Initializing NVMe Controllers 00:28:02.611 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:02.611 Controller IO queue size 128, less than required. 00:28:02.612 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:02.612 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:02.612 Initialization complete. Launching workers. 
00:28:02.612 ======================================================== 00:28:02.612 Latency(us) 00:28:02.612 Device Information : IOPS MiB/s Average min max 00:28:02.612 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15782.38 7.71 8110.54 2019.85 48860.83 00:28:02.612 ======================================================== 00:28:02.612 Total : 15782.38 7.71 8110.54 2019.85 48860.83 00:28:02.612 00:28:02.612 12:05:55 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:02.612 12:05:55 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:02.612 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.669 Initializing NVMe Controllers 00:28:12.669 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:12.669 Controller IO queue size 128, less than required. 00:28:12.669 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:12.669 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:12.669 Initialization complete. Launching workers. 00:28:12.669 ======================================================== 00:28:12.669 Latency(us) 00:28:12.669 Device Information : IOPS MiB/s Average min max 00:28:12.669 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1148.80 143.60 112095.59 15877.81 239473.88 00:28:12.669 ======================================================== 00:28:12.669 Total : 1148.80 143.60 112095.59 15877.81 239473.88 00:28:12.669 00:28:12.669 12:06:06 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:12.669 12:06:06 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ef6bcb5b-bfc8-4715-8f21-239e6a55fe7a 00:28:14.583 12:06:07 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:28:14.583 12:06:08 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4e76c097-9c51-42f2-a8eb-516f100158f2 00:28:14.583 12:06:08 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:28:14.844 12:06:08 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:28:14.844 12:06:08 -- host/perf.sh@114 -- # nvmftestfini 00:28:14.844 12:06:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:14.844 12:06:08 -- nvmf/common.sh@116 -- # sync 00:28:14.844 12:06:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:14.844 12:06:08 -- nvmf/common.sh@119 -- # set +e 00:28:14.844 12:06:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:14.844 12:06:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:14.844 rmmod nvme_tcp 00:28:14.844 rmmod nvme_fabrics 00:28:14.844 rmmod nvme_keyring 00:28:14.844 12:06:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:14.844 12:06:08 -- nvmf/common.sh@123 -- # set -e 00:28:14.844 12:06:08 -- nvmf/common.sh@124 -- # return 0 00:28:14.844 12:06:08 -- nvmf/common.sh@477 -- # '[' -n 2083872 ']' 00:28:14.844 12:06:08 -- nvmf/common.sh@478 -- # killprocess 2083872 00:28:14.844 12:06:08 -- common/autotest_common.sh@926 -- # '[' -z 2083872 ']' 00:28:14.844 12:06:08 -- common/autotest_common.sh@930 -- # kill 
-0 2083872 00:28:14.844 12:06:08 -- common/autotest_common.sh@931 -- # uname 00:28:14.844 12:06:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:14.844 12:06:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2083872 00:28:14.844 12:06:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:14.844 12:06:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:14.844 12:06:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2083872' 00:28:14.844 killing process with pid 2083872 00:28:14.844 12:06:08 -- common/autotest_common.sh@945 -- # kill 2083872 00:28:14.844 12:06:08 -- common/autotest_common.sh@950 -- # wait 2083872 00:28:16.760 12:06:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:16.760 12:06:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:16.760 12:06:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:16.760 12:06:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:16.760 12:06:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:16.760 12:06:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.760 12:06:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:16.760 12:06:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.307 12:06:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:19.307 00:28:19.307 real 1m31.605s 00:28:19.307 user 5m25.337s 00:28:19.307 sys 0m13.602s 00:28:19.307 12:06:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:19.307 12:06:12 -- common/autotest_common.sh@10 -- # set +x 00:28:19.307 ************************************ 00:28:19.307 END TEST nvmf_perf 00:28:19.307 ************************************ 00:28:19.307 12:06:12 -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:19.307 12:06:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:19.307 12:06:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:19.307 12:06:12 -- common/autotest_common.sh@10 -- # set +x 00:28:19.307 ************************************ 00:28:19.307 START TEST nvmf_fio_host 00:28:19.307 ************************************ 00:28:19.307 12:06:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:19.307 * Looking for test storage... 
00:28:19.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:19.307 12:06:12 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:19.307 12:06:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.307 12:06:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.307 12:06:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.307 12:06:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.307 12:06:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.307 12:06:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.307 12:06:12 -- paths/export.sh@5 -- # export PATH 00:28:19.307 12:06:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.307 12:06:12 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:19.307 12:06:12 -- nvmf/common.sh@7 -- # uname -s 00:28:19.307 12:06:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:19.307 12:06:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:19.307 12:06:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:19.307 12:06:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:19.307 12:06:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:19.307 12:06:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:19.307 12:06:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:19.307 12:06:12 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:19.307 12:06:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:19.307 12:06:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:19.307 12:06:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:19.307 12:06:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:19.307 12:06:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:19.307 12:06:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:19.307 12:06:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:19.307 12:06:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:19.307 12:06:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.307 12:06:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.307 12:06:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.307 12:06:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.308 12:06:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.308 12:06:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.308 12:06:12 -- paths/export.sh@5 -- # export PATH 00:28:19.308 12:06:12 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.308 12:06:12 -- nvmf/common.sh@46 -- # : 0 00:28:19.308 12:06:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:19.308 12:06:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:19.308 12:06:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:19.308 12:06:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.308 12:06:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.308 12:06:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:19.308 12:06:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:19.308 12:06:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:19.308 12:06:12 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:19.308 12:06:12 -- host/fio.sh@14 -- # nvmftestinit 00:28:19.308 12:06:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:19.308 12:06:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:19.308 12:06:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:19.308 12:06:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:19.308 12:06:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:19.308 12:06:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.308 12:06:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:19.308 12:06:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.308 12:06:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:19.308 12:06:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:19.308 12:06:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:19.308 12:06:12 -- common/autotest_common.sh@10 -- # set +x 00:28:25.898 12:06:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:25.898 12:06:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:25.898 12:06:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:25.898 12:06:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:25.898 12:06:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:25.898 12:06:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:25.898 12:06:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:25.898 12:06:19 -- nvmf/common.sh@294 -- # net_devs=() 00:28:25.898 12:06:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:25.898 12:06:19 -- nvmf/common.sh@295 -- # e810=() 00:28:25.898 12:06:19 -- nvmf/common.sh@295 -- # local -ga e810 00:28:25.898 12:06:19 -- nvmf/common.sh@296 -- # x722=() 00:28:25.898 12:06:19 -- nvmf/common.sh@296 -- # local -ga x722 00:28:25.898 12:06:19 -- nvmf/common.sh@297 -- # mlx=() 00:28:25.898 12:06:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:25.898 12:06:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:25.898 12:06:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:25.898 12:06:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:25.898 12:06:19 -- 
nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:25.898 12:06:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:25.898 12:06:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:25.898 12:06:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:25.898 12:06:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:25.898 12:06:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:25.898 12:06:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:25.898 12:06:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:25.898 12:06:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:25.898 12:06:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:25.898 12:06:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:25.898 12:06:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:25.898 12:06:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:25.898 12:06:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:25.898 12:06:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:25.898 12:06:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:25.898 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:25.898 12:06:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:25.898 12:06:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:25.898 12:06:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.898 12:06:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.898 12:06:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:25.898 12:06:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:25.898 12:06:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:25.898 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:25.898 12:06:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:25.898 12:06:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:25.898 12:06:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.898 12:06:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.898 12:06:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:25.898 12:06:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:25.898 12:06:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:25.898 12:06:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:25.898 12:06:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:25.898 12:06:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.898 12:06:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:25.898 12:06:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.898 12:06:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:25.898 Found net devices under 0000:31:00.0: cvl_0_0 00:28:25.898 12:06:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.898 12:06:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:25.898 12:06:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.898 12:06:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:25.898 12:06:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.898 12:06:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:25.898 Found net devices under 0000:31:00.1: cvl_0_1 
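[Editor's note] The trace above builds whitelists of supported NIC PCI IDs (e810/x722/mlx) and then, for each matching PCI function, resolves the kernel net device by listing sysfs, which is where the "Found net devices under 0000:31:00.x: cvl_0_x" lines come from. A minimal sketch of that lookup is below; the PCI addresses and vendor/device ID are taken from the log, while the loop itself is illustrative and not the suite's own nvmf/common.sh code.

# Sketch of the device-discovery step traced above: for each whitelisted PCI
# function, read the net device name(s) from sysfs.
pci_devs=("0000:31:00.0" "0000:31:00.1")   # e810 functions (0x8086:0x159b) found above
for pci in "${pci_devs[@]}"; do
    # every netdev bound to this PCI function appears as a directory here
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] || continue
        echo "Found net devices under $pci: $(basename "$netdir")"
    done
done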
00:28:25.898 12:06:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.898 12:06:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:25.898 12:06:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:25.898 12:06:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:25.898 12:06:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:25.898 12:06:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:25.898 12:06:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:25.898 12:06:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:25.898 12:06:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:25.898 12:06:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:25.898 12:06:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:25.898 12:06:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:25.898 12:06:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:25.898 12:06:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:25.898 12:06:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:25.898 12:06:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:25.898 12:06:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:25.898 12:06:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:25.898 12:06:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:26.159 12:06:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:26.159 12:06:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:26.159 12:06:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:26.159 12:06:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:26.159 12:06:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:26.159 12:06:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:26.159 12:06:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:26.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:26.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms 00:28:26.159 00:28:26.159 --- 10.0.0.2 ping statistics --- 00:28:26.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.159 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:28:26.159 12:06:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:26.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:26.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.358 ms 00:28:26.159 00:28:26.159 --- 10.0.0.1 ping statistics --- 00:28:26.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.159 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:28:26.159 12:06:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:26.159 12:06:19 -- nvmf/common.sh@410 -- # return 0 00:28:26.159 12:06:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:26.159 12:06:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:26.159 12:06:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:26.159 12:06:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:26.159 12:06:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:26.159 12:06:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:26.159 12:06:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:26.159 12:06:19 -- host/fio.sh@16 -- # [[ y != y ]] 00:28:26.420 12:06:19 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:28:26.420 12:06:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:26.420 12:06:19 -- common/autotest_common.sh@10 -- # set +x 00:28:26.420 12:06:19 -- host/fio.sh@24 -- # nvmfpid=2104618 00:28:26.420 12:06:19 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:26.420 12:06:19 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:26.420 12:06:19 -- host/fio.sh@28 -- # waitforlisten 2104618 00:28:26.420 12:06:19 -- common/autotest_common.sh@819 -- # '[' -z 2104618 ']' 00:28:26.420 12:06:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:26.420 12:06:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:26.420 12:06:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:26.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:26.420 12:06:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:26.420 12:06:19 -- common/autotest_common.sh@10 -- # set +x 00:28:26.420 [2024-06-10 12:06:19.989464] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:26.420 [2024-06-10 12:06:19.989532] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:26.420 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.420 [2024-06-10 12:06:20.064523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:26.420 [2024-06-10 12:06:20.130712] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:26.420 [2024-06-10 12:06:20.130848] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:26.420 [2024-06-10 12:06:20.130858] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:26.420 [2024-06-10 12:06:20.130866] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
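[Editor's note] The nvmf_tcp_init trace above sets up a loopback TCP topology: the first E810 port (cvl_0_0) is moved into a private network namespace and addressed as the target at 10.0.0.2, the second port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, connectivity is verified with ping in both directions, and nvmf_tgt is then started inside the namespace. The condensed recap below uses the commands lifted from the log (error handling omitted); the relative nvmf_tgt path assumes the SPDK source tree as the working directory.

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
# start the target inside the namespace, as host/fio.sh@23 does above;
# the suite then waits for the RPC socket (/var/tmp/spdk.sock) before
# issuing any rpc.py calls (nvmf_create_transport, subsystem setup, ...)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &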
00:28:26.420 [2024-06-10 12:06:20.131040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.420 [2024-06-10 12:06:20.131054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:26.420 [2024-06-10 12:06:20.131196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.420 [2024-06-10 12:06:20.131197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:27.362 12:06:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:27.362 12:06:20 -- common/autotest_common.sh@852 -- # return 0 00:28:27.362 12:06:20 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:27.362 [2024-06-10 12:06:20.953871] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:27.362 12:06:20 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:28:27.362 12:06:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:27.362 12:06:20 -- common/autotest_common.sh@10 -- # set +x 00:28:27.362 12:06:21 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:28:27.622 Malloc1 00:28:27.623 12:06:21 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:27.623 12:06:21 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:27.883 12:06:21 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:28.144 [2024-06-10 12:06:21.663443] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:28.144 12:06:21 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:28.144 12:06:21 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:28.144 12:06:21 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:28.144 12:06:21 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:28.144 12:06:21 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:28.144 12:06:21 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:28.144 12:06:21 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:28.144 12:06:21 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:28.144 12:06:21 -- common/autotest_common.sh@1320 -- # shift 00:28:28.144 12:06:21 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:28.144 12:06:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:28.144 12:06:21 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:28.144 12:06:21 -- common/autotest_common.sh@1324 -- # grep 
libasan 00:28:28.144 12:06:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:28.144 12:06:21 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:28.144 12:06:21 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:28.144 12:06:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:28.144 12:06:21 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:28.144 12:06:21 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:28.144 12:06:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:28.144 12:06:21 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:28.144 12:06:21 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:28.144 12:06:21 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:28.144 12:06:21 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:28.737 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:28.737 fio-3.35 00:28:28.737 Starting 1 thread 00:28:28.737 EAL: No free 2048 kB hugepages reported on node 1 00:28:31.349 00:28:31.349 test: (groupid=0, jobs=1): err= 0: pid=2105314: Mon Jun 10 12:06:24 2024 00:28:31.349 read: IOPS=13.0k, BW=50.7MiB/s (53.2MB/s)(102MiB/2004msec) 00:28:31.349 slat (usec): min=2, max=272, avg= 2.15, stdev= 2.45 00:28:31.349 clat (usec): min=3433, max=8412, avg=5445.02, stdev=1050.25 00:28:31.349 lat (usec): min=3435, max=8414, avg=5447.17, stdev=1050.29 00:28:31.349 clat percentiles (usec): 00:28:31.349 | 1.00th=[ 4015], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4621], 00:28:31.349 | 30.00th=[ 4752], 40.00th=[ 4883], 50.00th=[ 5014], 60.00th=[ 5211], 00:28:31.349 | 70.00th=[ 5932], 80.00th=[ 6718], 90.00th=[ 7111], 95.00th=[ 7373], 00:28:31.349 | 99.00th=[ 7832], 99.50th=[ 7963], 99.90th=[ 8225], 99.95th=[ 8291], 00:28:31.349 | 99.99th=[ 8356] 00:28:31.349 bw ( KiB/s): min=39840, max=59392, per=99.95%, avg=51934.00, stdev=9263.13, samples=4 00:28:31.349 iops : min= 9960, max=14848, avg=12983.50, stdev=2315.78, samples=4 00:28:31.349 write: IOPS=13.0k, BW=50.7MiB/s (53.2MB/s)(102MiB/2004msec); 0 zone resets 00:28:31.349 slat (usec): min=2, max=265, avg= 2.25, stdev= 1.81 00:28:31.349 clat (usec): min=2587, max=7488, avg=4357.43, stdev=842.05 00:28:31.349 lat (usec): min=2590, max=7490, avg=4359.68, stdev=842.11 00:28:31.349 clat percentiles (usec): 00:28:31.349 | 1.00th=[ 3163], 5.00th=[ 3425], 10.00th=[ 3556], 20.00th=[ 3687], 00:28:31.349 | 30.00th=[ 3785], 40.00th=[ 3884], 50.00th=[ 4015], 60.00th=[ 4146], 00:28:31.349 | 70.00th=[ 4752], 80.00th=[ 5407], 90.00th=[ 5669], 95.00th=[ 5932], 00:28:31.349 | 99.00th=[ 6259], 99.50th=[ 6390], 99.90th=[ 6652], 99.95th=[ 6849], 00:28:31.349 | 99.99th=[ 7439] 00:28:31.349 bw ( KiB/s): min=40528, max=59520, per=99.96%, avg=51892.00, stdev=9142.22, samples=4 00:28:31.349 iops : min=10132, max=14880, avg=12973.00, stdev=2285.55, samples=4 00:28:31.349 lat (msec) : 4=25.31%, 10=74.69% 00:28:31.349 cpu : usr=67.10%, sys=28.61%, ctx=58, majf=0, minf=6 00:28:31.349 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:31.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:31.349 issued rwts: total=26032,26008,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.349 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:31.349 00:28:31.349 Run status group 0 (all jobs): 00:28:31.349 READ: bw=50.7MiB/s (53.2MB/s), 50.7MiB/s-50.7MiB/s (53.2MB/s-53.2MB/s), io=102MiB (107MB), run=2004-2004msec 00:28:31.349 WRITE: bw=50.7MiB/s (53.2MB/s), 50.7MiB/s-50.7MiB/s (53.2MB/s-53.2MB/s), io=102MiB (107MB), run=2004-2004msec 00:28:31.349 12:06:24 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:31.349 12:06:24 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:31.349 12:06:24 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:31.349 12:06:24 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:31.349 12:06:24 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:31.349 12:06:24 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:31.349 12:06:24 -- common/autotest_common.sh@1320 -- # shift 00:28:31.349 12:06:24 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:31.349 12:06:24 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:31.349 12:06:24 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:31.349 12:06:24 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:31.349 12:06:24 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:31.349 12:06:24 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:31.349 12:06:24 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:31.349 12:06:24 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:31.349 12:06:24 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:31.349 12:06:24 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:31.349 12:06:24 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:31.349 12:06:24 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:31.349 12:06:24 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:31.349 12:06:24 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:31.349 12:06:24 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:31.349 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:28:31.349 fio-3.35 00:28:31.349 Starting 1 thread 00:28:31.349 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.896 00:28:33.896 test: (groupid=0, jobs=1): err= 0: pid=2106033: Mon Jun 10 12:06:27 2024 00:28:33.896 read: IOPS=8995, BW=141MiB/s (147MB/s)(282MiB/2005msec) 00:28:33.896 slat (usec): min=3, max=111, avg= 3.67, stdev= 1.85 00:28:33.896 clat (usec): min=1421, max=20768, avg=8758.85, stdev=2234.80 
00:28:33.896 lat (usec): min=1425, max=20771, avg=8762.52, stdev=2235.06 00:28:33.896 clat percentiles (usec): 00:28:33.896 | 1.00th=[ 4359], 5.00th=[ 5407], 10.00th=[ 5932], 20.00th=[ 6783], 00:28:33.896 | 30.00th=[ 7439], 40.00th=[ 8094], 50.00th=[ 8717], 60.00th=[ 9110], 00:28:33.897 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[11994], 95.00th=[12518], 00:28:33.897 | 99.00th=[14484], 99.50th=[14877], 99.90th=[16712], 99.95th=[17171], 00:28:33.897 | 99.99th=[17695] 00:28:33.897 bw ( KiB/s): min=66656, max=81312, per=49.15%, avg=70744.00, stdev=7068.28, samples=4 00:28:33.897 iops : min= 4166, max= 5082, avg=4421.50, stdev=441.77, samples=4 00:28:33.897 write: IOPS=5207, BW=81.4MiB/s (85.3MB/s)(144MiB/1770msec); 0 zone resets 00:28:33.897 slat (usec): min=39, max=394, avg=41.30, stdev= 8.54 00:28:33.897 clat (usec): min=2812, max=16355, avg=9541.52, stdev=1635.05 00:28:33.897 lat (usec): min=2852, max=16492, avg=9582.82, stdev=1637.46 00:28:33.897 clat percentiles (usec): 00:28:33.897 | 1.00th=[ 6259], 5.00th=[ 7177], 10.00th=[ 7635], 20.00th=[ 8160], 00:28:33.897 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9896], 00:28:33.897 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11600], 95.00th=[12387], 00:28:33.897 | 99.00th=[14615], 99.50th=[15664], 99.90th=[15926], 99.95th=[16057], 00:28:33.897 | 99.99th=[16319] 00:28:33.897 bw ( KiB/s): min=68992, max=84896, per=87.91%, avg=73256.00, stdev=7775.99, samples=4 00:28:33.897 iops : min= 4312, max= 5306, avg=4578.50, stdev=486.00, samples=4 00:28:33.897 lat (msec) : 2=0.04%, 4=0.39%, 10=70.00%, 20=29.56%, 50=0.01% 00:28:33.897 cpu : usr=81.84%, sys=14.77%, ctx=12, majf=0, minf=11 00:28:33.897 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:28:33.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.897 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:33.897 issued rwts: total=18036,9218,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:33.897 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:33.897 00:28:33.897 Run status group 0 (all jobs): 00:28:33.897 READ: bw=141MiB/s (147MB/s), 141MiB/s-141MiB/s (147MB/s-147MB/s), io=282MiB (296MB), run=2005-2005msec 00:28:33.897 WRITE: bw=81.4MiB/s (85.3MB/s), 81.4MiB/s-81.4MiB/s (85.3MB/s-85.3MB/s), io=144MiB (151MB), run=1770-1770msec 00:28:33.897 12:06:27 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:34.158 12:06:27 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:28:34.158 12:06:27 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:28:34.158 12:06:27 -- host/fio.sh@51 -- # get_nvme_bdfs 00:28:34.158 12:06:27 -- common/autotest_common.sh@1498 -- # bdfs=() 00:28:34.158 12:06:27 -- common/autotest_common.sh@1498 -- # local bdfs 00:28:34.158 12:06:27 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:34.158 12:06:27 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:34.158 12:06:27 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:28:34.158 12:06:27 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:28:34.158 12:06:27 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:28:34.158 12:06:27 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 
-i 10.0.0.2 00:28:34.419 Nvme0n1 00:28:34.680 12:06:28 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:28:35.252 12:06:28 -- host/fio.sh@53 -- # ls_guid=1aba1d62-c64e-4245-82f6-58aa9798b283 00:28:35.252 12:06:28 -- host/fio.sh@54 -- # get_lvs_free_mb 1aba1d62-c64e-4245-82f6-58aa9798b283 00:28:35.252 12:06:28 -- common/autotest_common.sh@1343 -- # local lvs_uuid=1aba1d62-c64e-4245-82f6-58aa9798b283 00:28:35.252 12:06:28 -- common/autotest_common.sh@1344 -- # local lvs_info 00:28:35.252 12:06:28 -- common/autotest_common.sh@1345 -- # local fc 00:28:35.252 12:06:28 -- common/autotest_common.sh@1346 -- # local cs 00:28:35.252 12:06:28 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:35.252 12:06:28 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:28:35.252 { 00:28:35.252 "uuid": "1aba1d62-c64e-4245-82f6-58aa9798b283", 00:28:35.252 "name": "lvs_0", 00:28:35.252 "base_bdev": "Nvme0n1", 00:28:35.252 "total_data_clusters": 1787, 00:28:35.252 "free_clusters": 1787, 00:28:35.252 "block_size": 512, 00:28:35.252 "cluster_size": 1073741824 00:28:35.252 } 00:28:35.252 ]' 00:28:35.252 12:06:28 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="1aba1d62-c64e-4245-82f6-58aa9798b283") .free_clusters' 00:28:35.252 12:06:28 -- common/autotest_common.sh@1348 -- # fc=1787 00:28:35.252 12:06:28 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="1aba1d62-c64e-4245-82f6-58aa9798b283") .cluster_size' 00:28:35.513 12:06:29 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:28:35.513 12:06:29 -- common/autotest_common.sh@1352 -- # free_mb=1829888 00:28:35.513 12:06:29 -- common/autotest_common.sh@1353 -- # echo 1829888 00:28:35.513 1829888 00:28:35.513 12:06:29 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:28:35.513 3893c28c-29f3-4b27-8456-c58da5b74ac1 00:28:35.513 12:06:29 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:28:35.773 12:06:29 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:28:35.773 12:06:29 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:36.035 12:06:29 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:36.035 12:06:29 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:36.035 12:06:29 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:36.035 12:06:29 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:36.035 12:06:29 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:36.035 12:06:29 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:36.035 
12:06:29 -- common/autotest_common.sh@1320 -- # shift 00:28:36.035 12:06:29 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:36.035 12:06:29 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:36.035 12:06:29 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:36.035 12:06:29 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:36.035 12:06:29 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:36.035 12:06:29 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:36.035 12:06:29 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:36.035 12:06:29 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:36.035 12:06:29 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:36.035 12:06:29 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:36.035 12:06:29 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:36.035 12:06:29 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:36.035 12:06:29 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:36.035 12:06:29 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:36.035 12:06:29 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:36.295 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:36.295 fio-3.35 00:28:36.295 Starting 1 thread 00:28:36.557 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.098 00:28:39.098 test: (groupid=0, jobs=1): err= 0: pid=2107148: Mon Jun 10 12:06:32 2024 00:28:39.098 read: IOPS=10.6k, BW=41.4MiB/s (43.4MB/s)(83.0MiB/2005msec) 00:28:39.098 slat (usec): min=2, max=113, avg= 2.21, stdev= 1.03 00:28:39.098 clat (usec): min=2774, max=10617, avg=6683.15, stdev=502.66 00:28:39.098 lat (usec): min=2791, max=10619, avg=6685.36, stdev=502.60 00:28:39.098 clat percentiles (usec): 00:28:39.098 | 1.00th=[ 5538], 5.00th=[ 5866], 10.00th=[ 6063], 20.00th=[ 6259], 00:28:39.098 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6783], 00:28:39.098 | 70.00th=[ 6915], 80.00th=[ 7111], 90.00th=[ 7308], 95.00th=[ 7504], 00:28:39.098 | 99.00th=[ 7832], 99.50th=[ 7898], 99.90th=[ 9110], 99.95th=[ 9765], 00:28:39.098 | 99.99th=[10552] 00:28:39.098 bw ( KiB/s): min=41016, max=42984, per=99.91%, avg=42346.00, stdev=908.21, samples=4 00:28:39.098 iops : min=10254, max=10746, avg=10586.50, stdev=227.05, samples=4 00:28:39.098 write: IOPS=10.6k, BW=41.4MiB/s (43.4MB/s)(82.9MiB/2005msec); 0 zone resets 00:28:39.098 slat (nsec): min=2121, max=95373, avg=2312.04, stdev=700.57 00:28:39.098 clat (usec): min=1198, max=9787, avg=5340.75, stdev=438.49 00:28:39.098 lat (usec): min=1205, max=9789, avg=5343.06, stdev=438.48 00:28:39.098 clat percentiles (usec): 00:28:39.098 | 1.00th=[ 4359], 5.00th=[ 4686], 10.00th=[ 4817], 20.00th=[ 5014], 00:28:39.098 | 30.00th=[ 5145], 40.00th=[ 5211], 50.00th=[ 5342], 60.00th=[ 5473], 00:28:39.098 | 70.00th=[ 5538], 80.00th=[ 5669], 90.00th=[ 5866], 95.00th=[ 5997], 00:28:39.098 | 99.00th=[ 6325], 99.50th=[ 6390], 99.90th=[ 7767], 99.95th=[ 9110], 00:28:39.098 | 99.99th=[ 9765] 00:28:39.098 bw ( KiB/s): min=41640, max=42856, per=99.99%, avg=42356.00, 
stdev=512.52, samples=4 00:28:39.098 iops : min=10410, max=10714, avg=10589.00, stdev=128.13, samples=4 00:28:39.098 lat (msec) : 2=0.01%, 4=0.12%, 10=99.86%, 20=0.01% 00:28:39.098 cpu : usr=68.06%, sys=28.29%, ctx=27, majf=0, minf=6 00:28:39.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:39.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:39.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:39.098 issued rwts: total=21245,21233,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:39.098 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:39.098 00:28:39.098 Run status group 0 (all jobs): 00:28:39.098 READ: bw=41.4MiB/s (43.4MB/s), 41.4MiB/s-41.4MiB/s (43.4MB/s-43.4MB/s), io=83.0MiB (87.0MB), run=2005-2005msec 00:28:39.098 WRITE: bw=41.4MiB/s (43.4MB/s), 41.4MiB/s-41.4MiB/s (43.4MB/s-43.4MB/s), io=82.9MiB (87.0MB), run=2005-2005msec 00:28:39.098 12:06:32 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:39.098 12:06:32 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:28:40.039 12:06:33 -- host/fio.sh@64 -- # ls_nested_guid=5338a18e-3625-4456-b18c-d920f8cca234 00:28:40.039 12:06:33 -- host/fio.sh@65 -- # get_lvs_free_mb 5338a18e-3625-4456-b18c-d920f8cca234 00:28:40.039 12:06:33 -- common/autotest_common.sh@1343 -- # local lvs_uuid=5338a18e-3625-4456-b18c-d920f8cca234 00:28:40.039 12:06:33 -- common/autotest_common.sh@1344 -- # local lvs_info 00:28:40.039 12:06:33 -- common/autotest_common.sh@1345 -- # local fc 00:28:40.039 12:06:33 -- common/autotest_common.sh@1346 -- # local cs 00:28:40.039 12:06:33 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:40.039 12:06:33 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:28:40.039 { 00:28:40.039 "uuid": "1aba1d62-c64e-4245-82f6-58aa9798b283", 00:28:40.039 "name": "lvs_0", 00:28:40.039 "base_bdev": "Nvme0n1", 00:28:40.039 "total_data_clusters": 1787, 00:28:40.039 "free_clusters": 0, 00:28:40.039 "block_size": 512, 00:28:40.039 "cluster_size": 1073741824 00:28:40.039 }, 00:28:40.039 { 00:28:40.039 "uuid": "5338a18e-3625-4456-b18c-d920f8cca234", 00:28:40.039 "name": "lvs_n_0", 00:28:40.039 "base_bdev": "3893c28c-29f3-4b27-8456-c58da5b74ac1", 00:28:40.039 "total_data_clusters": 457025, 00:28:40.039 "free_clusters": 457025, 00:28:40.039 "block_size": 512, 00:28:40.039 "cluster_size": 4194304 00:28:40.039 } 00:28:40.039 ]' 00:28:40.039 12:06:33 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="5338a18e-3625-4456-b18c-d920f8cca234") .free_clusters' 00:28:40.039 12:06:33 -- common/autotest_common.sh@1348 -- # fc=457025 00:28:40.039 12:06:33 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="5338a18e-3625-4456-b18c-d920f8cca234") .cluster_size' 00:28:40.039 12:06:33 -- common/autotest_common.sh@1349 -- # cs=4194304 00:28:40.039 12:06:33 -- common/autotest_common.sh@1352 -- # free_mb=1828100 00:28:40.039 12:06:33 -- common/autotest_common.sh@1353 -- # echo 1828100 00:28:40.039 1828100 00:28:40.039 12:06:33 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:28:40.979 098d6804-bfd7-415a-a24c-a42dc62e84c8 00:28:40.979 12:06:34 -- host/fio.sh@67 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:28:41.240 12:06:34 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:28:41.502 12:06:35 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:28:41.502 12:06:35 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:41.502 12:06:35 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:41.502 12:06:35 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:41.502 12:06:35 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:41.502 12:06:35 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:41.502 12:06:35 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:41.502 12:06:35 -- common/autotest_common.sh@1320 -- # shift 00:28:41.502 12:06:35 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:41.502 12:06:35 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:41.502 12:06:35 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:41.502 12:06:35 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:41.502 12:06:35 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:41.502 12:06:35 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:41.502 12:06:35 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:41.502 12:06:35 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:41.502 12:06:35 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:41.502 12:06:35 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:41.502 12:06:35 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:41.502 12:06:35 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:41.502 12:06:35 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:41.502 12:06:35 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:41.502 12:06:35 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:42.090 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:42.090 fio-3.35 00:28:42.090 Starting 1 thread 00:28:42.090 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.621 00:28:44.621 test: (groupid=0, jobs=1): err= 0: pid=2108469: Mon Jun 10 12:06:38 2024 00:28:44.621 read: IOPS=9694, BW=37.9MiB/s (39.7MB/s)(76.0MiB/2006msec) 00:28:44.621 slat (usec): min=2, max=113, avg= 2.21, stdev= 1.08 00:28:44.621 clat (usec): min=2753, max=12136, 
avg=7295.26, stdev=577.78 00:28:44.621 lat (usec): min=2770, max=12138, avg=7297.47, stdev=577.73 00:28:44.621 clat percentiles (usec): 00:28:44.621 | 1.00th=[ 5997], 5.00th=[ 6390], 10.00th=[ 6587], 20.00th=[ 6849], 00:28:44.621 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7439], 00:28:44.621 | 70.00th=[ 7570], 80.00th=[ 7767], 90.00th=[ 7963], 95.00th=[ 8225], 00:28:44.621 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[ 9896], 99.95th=[11469], 00:28:44.621 | 99.99th=[12125] 00:28:44.621 bw ( KiB/s): min=37712, max=39376, per=99.94%, avg=38756.00, stdev=767.90, samples=4 00:28:44.621 iops : min= 9428, max= 9844, avg=9689.00, stdev=191.98, samples=4 00:28:44.621 write: IOPS=9700, BW=37.9MiB/s (39.7MB/s)(76.0MiB/2006msec); 0 zone resets 00:28:44.621 slat (nsec): min=2115, max=95420, avg=2310.36, stdev=733.03 00:28:44.621 clat (usec): min=1181, max=11305, avg=5818.04, stdev=505.38 00:28:44.621 lat (usec): min=1188, max=11307, avg=5820.35, stdev=505.36 00:28:44.621 clat percentiles (usec): 00:28:44.621 | 1.00th=[ 4621], 5.00th=[ 5080], 10.00th=[ 5211], 20.00th=[ 5407], 00:28:44.621 | 30.00th=[ 5604], 40.00th=[ 5669], 50.00th=[ 5800], 60.00th=[ 5932], 00:28:44.621 | 70.00th=[ 6063], 80.00th=[ 6194], 90.00th=[ 6390], 95.00th=[ 6587], 00:28:44.621 | 99.00th=[ 6915], 99.50th=[ 7111], 99.90th=[ 8979], 99.95th=[10421], 00:28:44.621 | 99.99th=[11207] 00:28:44.621 bw ( KiB/s): min=38288, max=39264, per=100.00%, avg=38804.00, stdev=401.68, samples=4 00:28:44.621 iops : min= 9572, max= 9816, avg=9701.00, stdev=100.42, samples=4 00:28:44.621 lat (msec) : 2=0.01%, 4=0.11%, 10=99.81%, 20=0.08% 00:28:44.621 cpu : usr=64.94%, sys=31.77%, ctx=49, majf=0, minf=6 00:28:44.621 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:44.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:44.621 issued rwts: total=19448,19459,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.621 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:44.621 00:28:44.621 Run status group 0 (all jobs): 00:28:44.621 READ: bw=37.9MiB/s (39.7MB/s), 37.9MiB/s-37.9MiB/s (39.7MB/s-39.7MB/s), io=76.0MiB (79.7MB), run=2006-2006msec 00:28:44.621 WRITE: bw=37.9MiB/s (39.7MB/s), 37.9MiB/s-37.9MiB/s (39.7MB/s-39.7MB/s), io=76.0MiB (79.7MB), run=2006-2006msec 00:28:44.621 12:06:38 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:44.621 12:06:38 -- host/fio.sh@74 -- # sync 00:28:44.621 12:06:38 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:28:46.529 12:06:40 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:28:46.789 12:06:40 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:28:47.361 12:06:40 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:28:47.361 12:06:41 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:28:49.905 12:06:43 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:49.905 12:06:43 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:28:49.905 12:06:43 -- host/fio.sh@86 -- # nvmftestfini 00:28:49.905 
12:06:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:49.905 12:06:43 -- nvmf/common.sh@116 -- # sync 00:28:49.905 12:06:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:49.905 12:06:43 -- nvmf/common.sh@119 -- # set +e 00:28:49.905 12:06:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:49.905 12:06:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:49.905 rmmod nvme_tcp 00:28:49.905 rmmod nvme_fabrics 00:28:49.905 rmmod nvme_keyring 00:28:49.905 12:06:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:49.905 12:06:43 -- nvmf/common.sh@123 -- # set -e 00:28:49.906 12:06:43 -- nvmf/common.sh@124 -- # return 0 00:28:49.906 12:06:43 -- nvmf/common.sh@477 -- # '[' -n 2104618 ']' 00:28:49.906 12:06:43 -- nvmf/common.sh@478 -- # killprocess 2104618 00:28:49.906 12:06:43 -- common/autotest_common.sh@926 -- # '[' -z 2104618 ']' 00:28:49.906 12:06:43 -- common/autotest_common.sh@930 -- # kill -0 2104618 00:28:49.906 12:06:43 -- common/autotest_common.sh@931 -- # uname 00:28:49.906 12:06:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:49.906 12:06:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2104618 00:28:49.906 12:06:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:49.906 12:06:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:49.906 12:06:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2104618' 00:28:49.906 killing process with pid 2104618 00:28:49.906 12:06:43 -- common/autotest_common.sh@945 -- # kill 2104618 00:28:49.906 12:06:43 -- common/autotest_common.sh@950 -- # wait 2104618 00:28:49.906 12:06:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:49.906 12:06:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:49.906 12:06:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:49.906 12:06:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:49.906 12:06:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:49.906 12:06:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.906 12:06:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:49.906 12:06:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.821 12:06:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:51.821 00:28:51.821 real 0m32.856s 00:28:51.821 user 2m45.015s 00:28:51.821 sys 0m9.816s 00:28:51.821 12:06:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:51.821 12:06:45 -- common/autotest_common.sh@10 -- # set +x 00:28:51.821 ************************************ 00:28:51.821 END TEST nvmf_fio_host 00:28:51.821 ************************************ 00:28:51.821 12:06:45 -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:51.821 12:06:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:51.821 12:06:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:51.821 12:06:45 -- common/autotest_common.sh@10 -- # set +x 00:28:51.821 ************************************ 00:28:51.821 START TEST nvmf_failover 00:28:51.821 ************************************ 00:28:51.821 12:06:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:51.821 * Looking for test storage... 
00:28:51.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:51.821 12:06:45 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:51.821 12:06:45 -- nvmf/common.sh@7 -- # uname -s 00:28:51.821 12:06:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:51.821 12:06:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:51.821 12:06:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:51.821 12:06:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:51.821 12:06:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:51.821 12:06:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:51.821 12:06:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:51.821 12:06:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:51.821 12:06:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:51.821 12:06:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:52.083 12:06:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:52.083 12:06:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:52.083 12:06:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:52.083 12:06:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:52.083 12:06:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:52.083 12:06:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:52.083 12:06:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:52.083 12:06:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:52.083 12:06:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:52.083 12:06:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.083 12:06:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.083 12:06:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.083 12:06:45 -- paths/export.sh@5 -- # export PATH 00:28:52.083 12:06:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.083 12:06:45 -- nvmf/common.sh@46 -- # : 0 00:28:52.083 12:06:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:52.083 12:06:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:52.083 12:06:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:52.083 12:06:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:52.083 12:06:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:52.083 12:06:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:52.083 12:06:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:52.083 12:06:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:52.083 12:06:45 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:52.083 12:06:45 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:52.083 12:06:45 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:52.083 12:06:45 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:52.083 12:06:45 -- host/failover.sh@18 -- # nvmftestinit 00:28:52.083 12:06:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:52.083 12:06:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:52.083 12:06:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:52.083 12:06:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:52.083 12:06:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:52.083 12:06:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.083 12:06:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:52.083 12:06:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.083 12:06:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:52.083 12:06:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:52.083 12:06:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:52.083 12:06:45 -- common/autotest_common.sh@10 -- # set +x 00:29:00.228 12:06:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:00.228 12:06:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:00.228 12:06:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:00.228 12:06:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:00.228 12:06:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:00.228 12:06:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:00.228 12:06:52 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:29:00.228 12:06:52 -- nvmf/common.sh@294 -- # net_devs=() 00:29:00.228 12:06:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:00.228 12:06:52 -- nvmf/common.sh@295 -- # e810=() 00:29:00.228 12:06:52 -- nvmf/common.sh@295 -- # local -ga e810 00:29:00.228 12:06:52 -- nvmf/common.sh@296 -- # x722=() 00:29:00.228 12:06:52 -- nvmf/common.sh@296 -- # local -ga x722 00:29:00.228 12:06:52 -- nvmf/common.sh@297 -- # mlx=() 00:29:00.228 12:06:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:00.228 12:06:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:00.228 12:06:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:00.228 12:06:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:00.228 12:06:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:00.228 12:06:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:00.228 12:06:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:00.228 12:06:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:00.228 12:06:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:00.228 12:06:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:00.228 12:06:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:00.228 12:06:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:00.228 12:06:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:00.228 12:06:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:00.228 12:06:52 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:00.228 12:06:52 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:00.228 12:06:52 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:00.228 12:06:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:00.228 12:06:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:00.228 12:06:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:00.228 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:00.228 12:06:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:00.228 12:06:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:00.228 12:06:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.228 12:06:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.228 12:06:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:00.228 12:06:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:00.228 12:06:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:00.229 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:00.229 12:06:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:00.229 12:06:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:00.229 12:06:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.229 12:06:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.229 12:06:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:00.229 12:06:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:00.229 12:06:52 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:00.229 12:06:52 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:00.229 12:06:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:00.229 12:06:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.229 12:06:52 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:29:00.229 12:06:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.229 12:06:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:00.229 Found net devices under 0000:31:00.0: cvl_0_0 00:29:00.229 12:06:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.229 12:06:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:00.229 12:06:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.229 12:06:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:00.229 12:06:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.229 12:06:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:00.229 Found net devices under 0000:31:00.1: cvl_0_1 00:29:00.229 12:06:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.229 12:06:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:00.229 12:06:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:00.229 12:06:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:00.229 12:06:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:00.229 12:06:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:00.229 12:06:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:00.229 12:06:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:00.229 12:06:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:00.229 12:06:52 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:00.229 12:06:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:00.229 12:06:52 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:00.229 12:06:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:00.229 12:06:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:00.229 12:06:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:00.229 12:06:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:00.229 12:06:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:00.229 12:06:52 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:00.229 12:06:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:00.229 12:06:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:00.229 12:06:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:00.229 12:06:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:00.229 12:06:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:00.229 12:06:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:00.229 12:06:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:00.229 12:06:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:00.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:00.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:29:00.229 00:29:00.229 --- 10.0.0.2 ping statistics --- 00:29:00.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.229 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:29:00.229 12:06:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:00.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:00.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.354 ms 00:29:00.229 00:29:00.229 --- 10.0.0.1 ping statistics --- 00:29:00.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.229 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:29:00.229 12:06:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:00.229 12:06:52 -- nvmf/common.sh@410 -- # return 0 00:29:00.229 12:06:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:00.229 12:06:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:00.229 12:06:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:00.229 12:06:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:00.229 12:06:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:00.229 12:06:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:00.229 12:06:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:00.229 12:06:52 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:00.229 12:06:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:00.229 12:06:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:00.229 12:06:52 -- common/autotest_common.sh@10 -- # set +x 00:29:00.229 12:06:52 -- nvmf/common.sh@469 -- # nvmfpid=2113980 00:29:00.229 12:06:52 -- nvmf/common.sh@470 -- # waitforlisten 2113980 00:29:00.229 12:06:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:00.229 12:06:52 -- common/autotest_common.sh@819 -- # '[' -z 2113980 ']' 00:29:00.229 12:06:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.229 12:06:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:00.229 12:06:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:00.229 12:06:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:00.229 12:06:52 -- common/autotest_common.sh@10 -- # set +x 00:29:00.229 [2024-06-10 12:06:52.968874] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:29:00.229 [2024-06-10 12:06:52.968935] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:00.229 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.229 [2024-06-10 12:06:53.058289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:00.229 [2024-06-10 12:06:53.148666] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:00.229 [2024-06-10 12:06:53.148834] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:00.229 [2024-06-10 12:06:53.148845] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:00.229 [2024-06-10 12:06:53.148853] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
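Both ends of the TCP connection live on the same host, so nvmftestinit (traced above) isolates the target-side port of the E810 NIC in its own network namespace: cvl_0_0 moves into cvl_0_0_ns_spdk with address 10.0.0.2, the initiator keeps cvl_0_1 at 10.0.0.1 in the root namespace, an iptables rule admits the NVMe/TCP port, and a ping in each direction confirms the path before the target is launched inside the namespace. Reduced to the essential commands from the trace, the plumbing is:

    ip netns add cvl_0_0_ns_spdk                                         # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move one NIC port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator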
00:29:00.229 [2024-06-10 12:06:53.149032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:00.229 [2024-06-10 12:06:53.149198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.229 [2024-06-10 12:06:53.149199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:00.229 12:06:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:00.229 12:06:53 -- common/autotest_common.sh@852 -- # return 0 00:29:00.229 12:06:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:00.229 12:06:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:00.229 12:06:53 -- common/autotest_common.sh@10 -- # set +x 00:29:00.229 12:06:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:00.229 12:06:53 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:00.229 [2024-06-10 12:06:53.912771] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.229 12:06:53 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:00.490 Malloc0 00:29:00.490 12:06:54 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:00.751 12:06:54 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:00.751 12:06:54 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:01.012 [2024-06-10 12:06:54.578754] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:01.012 12:06:54 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:01.012 [2024-06-10 12:06:54.735165] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:01.012 12:06:54 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:01.272 [2024-06-10 12:06:54.891654] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:01.272 12:06:54 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:29:01.272 12:06:54 -- host/failover.sh@31 -- # bdevperf_pid=2114386 00:29:01.272 12:06:54 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:01.272 12:06:54 -- host/failover.sh@34 -- # waitforlisten 2114386 /var/tmp/bdevperf.sock 00:29:01.273 12:06:54 -- common/autotest_common.sh@819 -- # '[' -z 2114386 ']' 00:29:01.273 12:06:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:01.273 12:06:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:01.273 12:06:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:29:01.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:01.273 12:06:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:01.273 12:06:54 -- common/autotest_common.sh@10 -- # set +x 00:29:02.215 12:06:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:02.215 12:06:55 -- common/autotest_common.sh@852 -- # return 0 00:29:02.215 12:06:55 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:02.476 NVMe0n1 00:29:02.476 12:06:56 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:02.736 00:29:02.736 12:06:56 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:02.736 12:06:56 -- host/failover.sh@39 -- # run_test_pid=2114692 00:29:02.736 12:06:56 -- host/failover.sh@41 -- # sleep 1 00:29:04.122 12:06:57 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:04.122 [2024-06-10 12:06:57.625161] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4faf0 is same with the state(5) to be set 00:29:04.122 [2024-06-10 12:06:57.625198] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4faf0 is same with the state(5) to be set 00:29:04.122 [2024-06-10 12:06:57.625204] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4faf0 is same with the state(5) to be set 00:29:04.122 [2024-06-10 12:06:57.625209] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4faf0 is same with the state(5) to be set 00:29:04.122 [2024-06-10 12:06:57.625213] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4faf0 is same with the state(5) to be set 00:29:04.122 [2024-06-10 12:06:57.625218] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4faf0 is same with the state(5) to be set 00:29:04.122 [2024-06-10 12:06:57.625222] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4faf0 is same with the state(5) to be set 00:29:04.122 [2024-06-10 12:06:57.625227] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4faf0 is same with the state(5) to be set 00:29:04.122 [2024-06-10 12:06:57.625231] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4faf0 is same with the state(5) to be set 00:29:04.122 [2024-06-10 12:06:57.625235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4faf0 is same with the state(5) to be set 00:29:04.122 [2024-06-10 12:06:57.625240] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4faf0 is same with the state(5) to be set 00:29:04.122 [2024-06-10 12:06:57.625248] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4faf0 is same with the state(5) to be set 00:29:04.122 [2024-06-10 12:06:57.625253] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4faf0 is same with the 
state(5) to be set 00:29:04.123 [2024-06-10
12:06:57.625451] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4faf0 is same with the state(5) to be set 00:29:04.123 12:06:57 -- host/failover.sh@45 -- # sleep 3 00:29:07.422 12:07:00 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:07.422 00:29:07.422 12:07:01 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:07.422 [2024-06-10 12:07:01.156433] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e511e0 is same with the state(5) to be set 00:29:07.422 [2024-06-10 12:07:01.156471] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e511e0 is same with the state(5) to be set 00:29:07.422 [2024-06-10 12:07:01.156476] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e511e0 is same with the state(5) to be set 00:29:07.422 [2024-06-10 12:07:01.156482] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e511e0 is same with the state(5) to be set 00:29:07.422 [2024-06-10 12:07:01.156486] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e511e0 is same with the state(5) to be set 00:29:07.422 [2024-06-10 12:07:01.156491] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e511e0 is same with the state(5) to be set 00:29:07.422 [2024-06-10 12:07:01.156496] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e511e0 is same with the state(5) to be set 00:29:07.422 [2024-06-10 12:07:01.156506] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e511e0 is same with the state(5) to be set 00:29:07.422 [2024-06-10 12:07:01.156511] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e511e0 is same with the state(5) to be set 00:29:07.422 [2024-06-10 12:07:01.156515] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e511e0 is same with the state(5) to be set 00:29:07.422 [2024-06-10 12:07:01.156519] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e511e0 is same with the state(5) to be set 00:29:07.422 [2024-06-10 12:07:01.156523] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e511e0 is same with the state(5) to be set 00:29:07.422 [2024-06-10 12:07:01.156528] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e511e0 is same with the state(5) to be set 00:29:07.422 [2024-06-10 12:07:01.156532] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e511e0 is same with the state(5) to be set 00:29:07.422 [2024-06-10 12:07:01.156536] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e511e0 is same with the state(5) to be set 00:29:07.422 [2024-06-10 12:07:01.156541] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e511e0 is same with the state(5) to be set 00:29:07.422 [2024-06-10 12:07:01.156545] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e511e0 is same with the state(5) to be set 00:29:07.423 [2024-06-10 12:07:01.156549] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1e511e0 is same with the
state(5) to be set 00:29:07.423 [2024-06-10 12:07:01.156745] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e511e0 is same with the state(5) to be set 00:29:07.423 [2024-06-10 12:07:01.156750] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e511e0 is same with the state(5) to be set 00:29:07.423 [2024-06-10 12:07:01.156754] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e511e0 is same with the state(5) to be set 00:29:07.423 [2024-06-10 12:07:01.156759] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e511e0 is same with the state(5) to be set 00:29:07.423 [2024-06-10 12:07:01.156764] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e511e0 is same with the state(5) to be set 00:29:07.423 [2024-06-10 12:07:01.156769] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e511e0 is same with the state(5) to be set 00:29:07.423 12:07:01 -- host/failover.sh@50 -- # sleep 3 00:29:10.787 12:07:04 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:10.787 [2024-06-10 12:07:04.324872] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:10.787 12:07:04 -- host/failover.sh@55 -- # sleep 1 00:29:11.730 12:07:05 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:11.731 [2024-06-10 12:07:05.499611] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e518a0 is same with the state(5) to be set 00:29:11.731 [2024-06-10 12:07:05.499648] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e518a0 is same with the state(5) to be set 00:29:11.731 [2024-06-10 12:07:05.499653] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e518a0 is same with the state(5) to be set 00:29:11.731 [2024-06-10 12:07:05.499658] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e518a0 is same with the state(5) to be set 00:29:11.731 [2024-06-10 12:07:05.499663] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e518a0 is same with the state(5) to be set 00:29:11.731 [2024-06-10 12:07:05.499668] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e518a0 is same with the state(5) to be set 00:29:11.731 [2024-06-10 12:07:05.499673] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e518a0 is same with the state(5) to be set 00:29:11.731 [2024-06-10 12:07:05.499677] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e518a0 is same with the state(5) to be set 00:29:11.731 [2024-06-10 12:07:05.499681] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e518a0 is same with the state(5) to be set 00:29:11.731 [2024-06-10 12:07:05.499686] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e518a0 is same with the state(5) to be set 00:29:11.731 [2024-06-10 12:07:05.499690] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e518a0 is same with the state(5) to be set 00:29:11.731 [2024-06-10 12:07:05.499694] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1e518a0 is same with the
state(5) to be set 00:29:11.732 [2024-06-10 12:07:05.499899] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e518a0 is same with the state(5) to be set 00:29:11.732 [2024-06-10 12:07:05.499904] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e518a0 is same with the state(5) to be set 00:29:11.732 [2024-06-10 12:07:05.499908] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e518a0 is same with the state(5) to be set 00:29:11.732 [2024-06-10 12:07:05.499912] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e518a0 is same with the state(5) to be set 00:29:11.732 [2024-06-10 12:07:05.499917] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e518a0 is same with the state(5) to be set 00:29:11.993 12:07:05 -- host/failover.sh@59 -- # wait 2114692 00:29:18.587 0 00:29:18.587 12:07:11 -- host/failover.sh@61 -- # killprocess 2114386 00:29:18.587 12:07:11 -- common/autotest_common.sh@926 -- # '[' -z 2114386 ']' 00:29:18.587 12:07:11 -- common/autotest_common.sh@930 -- # kill -0 2114386 00:29:18.587 12:07:11 -- common/autotest_common.sh@931 -- # uname 00:29:18.587 12:07:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:18.587 12:07:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2114386 00:29:18.587 12:07:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:18.587 12:07:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:18.587 12:07:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2114386' 00:29:18.587 killing process with pid 2114386 00:29:18.587 12:07:11 -- common/autotest_common.sh@945 -- # kill 2114386 00:29:18.587 12:07:11 -- common/autotest_common.sh@950 -- # wait 2114386 00:29:18.587 12:07:11 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:18.587 [2024-06-10 12:06:54.955559] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:29:18.587 [2024-06-10 12:06:54.955616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2114386 ] 00:29:18.587 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.587 [2024-06-10 12:06:55.015440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.587 [2024-06-10 12:06:55.078011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.587 Running I/O for 15 seconds... 
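The NOTICE lines that follow are the contents of bdevperf's try.txt, dumped by the cat above: each READ that was still in flight on a path whose listener had just been pulled completes with ABORTED - SQ DELETION, and the NVMe bdev layer is expected to resubmit it on the surviving path, which is why the 15-second verify run still completes and the script goes on to shut bdevperf down. The listener add/remove cycle that provokes these aborts, condensed from the failover.sh trace above (rpc.py shortened as before), is roughly:

    # two paths to start with, attached through the bdevperf RPC socket
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # while verify I/O runs, rotate the active listener on the target
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # fail over to 4421
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # fail over to 4422
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # bring 4420 back
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # fail back to 4420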
00:29:18.587 [2024-06-10 12:06:57.625843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.625877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.625893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:38576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.625902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.625913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.625920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.625929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:37968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.625937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.625946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.625953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.625962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:37984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.625970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.625979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:37992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.625986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.625995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:38008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626044] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:38664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626209] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:38088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:38096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:38104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:38144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:38152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:89 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:38808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.587 [2024-06-10 12:06:57.626447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.587 [2024-06-10 12:06:57.626463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:38832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.587 [2024-06-10 12:06:57.626480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.587 [2024-06-10 12:06:57.626496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:38848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38216 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.587 [2024-06-10 12:06:57.626552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.587 [2024-06-10 12:06:57.626560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.626569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.626576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.626585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.626592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.626601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.626608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.626617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:38288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.626624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.626633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:38296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.626639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.626648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:38304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.626655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.626664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:38312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.626672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.626681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.626688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.626697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:38344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:18.588 [2024-06-10 12:06:57.626704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.626714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.626720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.626729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.626736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.626745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.626752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.626761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:38400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.626768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.626777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.626784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.626793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.588 [2024-06-10 12:06:57.626800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.626809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.588 [2024-06-10 12:06:57.626815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.626824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.588 [2024-06-10 12:06:57.626831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.626840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.588 [2024-06-10 12:06:57.626847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.626855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.588 [2024-06-10 12:06:57.626862] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.626871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.588 [2024-06-10 12:06:57.626882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.626891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.588 [2024-06-10 12:06:57.626898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.626907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.626914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.626924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.626930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.626939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:38936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.626946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.626955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.626962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.626972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.626978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.626987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.626995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.627004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:38968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.627011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.627020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.588 [2024-06-10 12:06:57.627027] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.627035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.588 [2024-06-10 12:06:57.627042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.627051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.588 [2024-06-10 12:06:57.627058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.627067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.588 [2024-06-10 12:06:57.627074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.627084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.588 [2024-06-10 12:06:57.627092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.627101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.627108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.627117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:39024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.627123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.627132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.588 [2024-06-10 12:06:57.627139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.627148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.588 [2024-06-10 12:06:57.627155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.627164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.588 [2024-06-10 12:06:57.627171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.627180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:39056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.627187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.627196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.627203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.627211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.627218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.627227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:38456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.627234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.627246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:38464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.627253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.627262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.627269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.627278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.627286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.627295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.627302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.627311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.627318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.627327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:38520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.627334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.627343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.588 [2024-06-10 12:06:57.627350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:18.588 [2024-06-10 12:06:57.627358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.627365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.627374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.588 [2024-06-10 12:06:57.627381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.627390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.588 [2024-06-10 12:06:57.627397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.588 [2024-06-10 12:06:57.627406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:39104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.588 [2024-06-10 12:06:57.627412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:06:57.627429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:06:57.627445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:38552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:06:57.627462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:38568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:06:57.627478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:06:57.627495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:06:57.627511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627520] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:06:57.627527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:38648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:06:57.627543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:38656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:06:57.627559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:38696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:06:57.627575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:39128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:06:57.627591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.589 [2024-06-10 12:06:57.627607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.589 [2024-06-10 12:06:57.627623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:06:57.627638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:39160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.589 [2024-06-10 12:06:57.627654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.589 [2024-06-10 12:06:57.627670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627679] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:06:57.627687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:06:57.627702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:06:57.627719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.589 [2024-06-10 12:06:57.627734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.589 [2024-06-10 12:06:57.627750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.589 [2024-06-10 12:06:57.627765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.589 [2024-06-10 12:06:57.627781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:06:57.627796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.589 [2024-06-10 12:06:57.627817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.589 [2024-06-10 12:06:57.627832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:38704 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:06:57.627848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:06:57.627864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:38720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:06:57.627880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:06:57.627897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:06:57.627913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:38776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:06:57.627929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.627948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:18.589 [2024-06-10 12:06:57.627954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:18.589 [2024-06-10 12:06:57.627961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38800 len:8 PRP1 0x0 PRP2 0x0 00:29:18.589 [2024-06-10 12:06:57.627969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.628004] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16b5930 was disconnected and freed. reset controller. 
00:29:18.589 [2024-06-10 12:06:57.628019] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:18.589 [2024-06-10 12:06:57.628036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.589 [2024-06-10 12:06:57.628044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.628052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.589 [2024-06-10 12:06:57.628059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.628067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.589 [2024-06-10 12:06:57.628074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.628082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.589 [2024-06-10 12:06:57.628089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:06:57.628096] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.589 [2024-06-10 12:06:57.630279] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.589 [2024-06-10 12:06:57.630301] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1696bd0 (9): Bad file descriptor 00:29:18.589 [2024-06-10 12:06:57.663360] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
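The NOTICE lines above record the first path failing over from 10.0.0.2:4420 to 10.0.0.2:4421 and the controller reset completing. As a small, hypothetical post-mortem check (not part of failover.sh), the dumped trace file could be grepped for both events; the path is the one cat'd earlier in this log and the patterns are the NOTICE strings shown above.

# Hypothetical sanity check against the dumped trace file.
log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
grep -q 'Start failover from 10.0.0.2:4420 to 10.0.0.2:4421' "$log" &&
grep -q 'Resetting controller successful' "$log" &&
echo 'failover and controller reset both observed'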
00:29:18.589 [2024-06-10 12:07:01.157104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:86072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:07:01.157140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:07:01.157156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:86080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:07:01.157165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:07:01.157179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:07:01.157187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:07:01.157196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:86104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:07:01.157203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:07:01.157213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:86136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:07:01.157219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:07:01.157229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:86144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:07:01.157236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:07:01.157252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:86152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:07:01.157264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:07:01.157277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:86160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:07:01.157285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:07:01.157294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:86176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:07:01.157301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:07:01.157310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:86184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.589 [2024-06-10 12:07:01.157318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.589 [2024-06-10 12:07:01.157327] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:85568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:85576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:85592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:85600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:85616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:85624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:85632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:85664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:86192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157489] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:86224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:86232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:86256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:86272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:86280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:86288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:26 nsid:1 lba:85672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:85712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:85728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:85736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:85744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:85784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:85840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:86312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:86336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:86352 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:86368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.590 [2024-06-10 12:07:01.157847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:85848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.590 [2024-06-10 12:07:01.157853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.157862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:85880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.591 [2024-06-10 12:07:01.157869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.157878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.591 [2024-06-10 12:07:01.157885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.157894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:85904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.591 [2024-06-10 12:07:01.157901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.157910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:85912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.591 [2024-06-10 12:07:01.157917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.157926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:85936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.591 [2024-06-10 12:07:01.157933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.157942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:85952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.591 [2024-06-10 12:07:01.157949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.157958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:85960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.591 [2024-06-10 12:07:01.157966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.157975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:86384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:18.591 [2024-06-10 12:07:01.157982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.157991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:86392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.591 [2024-06-10 12:07:01.157998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:86400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.591 [2024-06-10 12:07:01.158015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:86408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.591 [2024-06-10 12:07:01.158031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.591 [2024-06-10 12:07:01.158046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:86424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.591 [2024-06-10 12:07:01.158063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:86432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.591 [2024-06-10 12:07:01.158079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:86440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.591 [2024-06-10 12:07:01.158094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:86448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.591 [2024-06-10 12:07:01.158110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:86456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.591 [2024-06-10 12:07:01.158126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:86464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.591 [2024-06-10 12:07:01.158142] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:86472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.591 [2024-06-10 12:07:01.158158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.591 [2024-06-10 12:07:01.158174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:86488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.591 [2024-06-10 12:07:01.158190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:86496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.591 [2024-06-10 12:07:01.158206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.591 [2024-06-10 12:07:01.158224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:86512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.591 [2024-06-10 12:07:01.158240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:86520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.591 [2024-06-10 12:07:01.158261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:86528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.591 [2024-06-10 12:07:01.158277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:86536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.591 [2024-06-10 12:07:01.158293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:86544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.591 [2024-06-10 12:07:01.158309] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.591 [2024-06-10 12:07:01.158325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:85984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.591 [2024-06-10 12:07:01.158341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:85992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.591 [2024-06-10 12:07:01.158357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:86000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.591 [2024-06-10 12:07:01.158373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:86024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.591 [2024-06-10 12:07:01.158389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:86032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.591 [2024-06-10 12:07:01.158405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.591 [2024-06-10 12:07:01.158422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:86552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.591 [2024-06-10 12:07:01.158438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:86560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.591 [2024-06-10 12:07:01.158454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:86568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.591 [2024-06-10 12:07:01.158470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:86576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.591 [2024-06-10 12:07:01.158486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.591 [2024-06-10 12:07:01.158495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:86584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.591 [2024-06-10 12:07:01.158502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:86592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.592 [2024-06-10 12:07:01.158518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.592 [2024-06-10 12:07:01.158534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:86608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.592 [2024-06-10 12:07:01.158550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:86616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.592 [2024-06-10 12:07:01.158566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:86624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.592 [2024-06-10 12:07:01.158582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:86632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.592 [2024-06-10 12:07:01.158598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:86640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.592 [2024-06-10 12:07:01.158614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:86648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.592 [2024-06-10 12:07:01.158631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 
[2024-06-10 12:07:01.158641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:86656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.592 [2024-06-10 12:07:01.158648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.592 [2024-06-10 12:07:01.158663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:86672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.592 [2024-06-10 12:07:01.158679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.592 [2024-06-10 12:07:01.158695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.592 [2024-06-10 12:07:01.158711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:86696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.592 [2024-06-10 12:07:01.158727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:86704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.592 [2024-06-10 12:07:01.158744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:86712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.592 [2024-06-10 12:07:01.158760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:86720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.592 [2024-06-10 12:07:01.158776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:86728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.592 [2024-06-10 12:07:01.158792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158802] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.592 [2024-06-10 12:07:01.158809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:86056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.592 [2024-06-10 12:07:01.158825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:86064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.592 [2024-06-10 12:07:01.158843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:86088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.592 [2024-06-10 12:07:01.158859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:86112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.592 [2024-06-10 12:07:01.158875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:86120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.592 [2024-06-10 12:07:01.158891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:86128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.592 [2024-06-10 12:07:01.158907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:86168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.592 [2024-06-10 12:07:01.158922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:86736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.592 [2024-06-10 12:07:01.158939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:86744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.592 [2024-06-10 12:07:01.158954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:19 nsid:1 lba:86752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.592 [2024-06-10 12:07:01.158974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:86760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.592 [2024-06-10 12:07:01.158990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.158999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:86768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.592 [2024-06-10 12:07:01.159006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.159016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:86776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.592 [2024-06-10 12:07:01.159023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.159032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:86784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.592 [2024-06-10 12:07:01.159040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.159049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.592 [2024-06-10 12:07:01.159057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.159066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:86800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.592 [2024-06-10 12:07:01.159073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.159082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:86808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.592 [2024-06-10 12:07:01.159088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.159097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:86816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.592 [2024-06-10 12:07:01.159104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.159113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:86208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.592 [2024-06-10 12:07:01.159120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.159129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:86264 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.592 [2024-06-10 12:07:01.159136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.592 [2024-06-10 12:07:01.159145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:86296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.592 [2024-06-10 12:07:01.159152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:01.159161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:86320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.593 [2024-06-10 12:07:01.159168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:01.159177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.593 [2024-06-10 12:07:01.159184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:01.159193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.593 [2024-06-10 12:07:01.159201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:01.159210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:86360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.593 [2024-06-10 12:07:01.159216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:01.159225] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a3090 is same with the state(5) to be set 00:29:18.593 [2024-06-10 12:07:01.159233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:18.593 [2024-06-10 12:07:01.159241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:18.593 [2024-06-10 12:07:01.159260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86376 len:8 PRP1 0x0 PRP2 0x0 00:29:18.593 [2024-06-10 12:07:01.159268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:01.159306] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16a3090 was disconnected and freed. reset controller. 
00:29:18.593 [2024-06-10 12:07:01.159315] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:29:18.593 [2024-06-10 12:07:01.159335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.593 [2024-06-10 12:07:01.159342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:01.159351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.593 [2024-06-10 12:07:01.159358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:01.159366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.593 [2024-06-10 12:07:01.159373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:01.159381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.593 [2024-06-10 12:07:01.159387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:01.159395] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.593 [2024-06-10 12:07:01.159419] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1696bd0 (9): Bad file descriptor 00:29:18.593 [2024-06-10 12:07:01.161746] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.593 [2024-06-10 12:07:01.194788] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
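[annotation] The burst of "ABORTED - SQ DELETION" completions above, followed by bdev_nvme_failover_trid and _bdev_nvme_reset_ctrlr_complete, corresponds to the bdev_nvme layer dropping the TCP path at 10.0.0.2:4421 and reconnecting nqn.2016-06.io.spdk:cnode1 on the alternate listener at 10.0.0.2:4422. A minimal sketch of how such a two-listener failover setup is typically wired up with SPDK's rpc.py is shown below; the addresses, port numbers and subsystem NQN are taken from the log lines above, while the bdev names (Malloc0, Nvme0) and serial number are illustrative assumptions, not values from this run.

    # Target side: export one namespace through two TCP listeners (ports as in the log above).
    ./scripts/rpc.py nvmf_create_transport -t TCP
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512        # Malloc0: hypothetical backing bdev
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

    # Initiator side: attach the same subsystem through both listeners under one controller name,
    # so bdev_nvme can fail over between the trids when a qpair is torn down.
    # (Depending on the SPDK version, the second attach may need an explicit --multipath failover.)
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1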
00:29:18.593 [2024-06-10 12:07:05.500572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.593 [2024-06-10 12:07:05.500608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:05.500626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.593 [2024-06-10 12:07:05.500634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:05.500644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.593 [2024-06-10 12:07:05.500652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:05.500661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.593 [2024-06-10 12:07:05.500668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:05.500678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.593 [2024-06-10 12:07:05.500685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:05.500702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.593 [2024-06-10 12:07:05.500709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:05.500719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.593 [2024-06-10 12:07:05.500726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:05.500734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.593 [2024-06-10 12:07:05.500741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:05.500750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.593 [2024-06-10 12:07:05.500757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:05.500767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.593 [2024-06-10 12:07:05.500774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:05.500783] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.593 [2024-06-10 12:07:05.500789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:05.500798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.593 [2024-06-10 12:07:05.500805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:05.500814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.593 [2024-06-10 12:07:05.500821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:05.500830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.593 [2024-06-10 12:07:05.500837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:05.500846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.593 [2024-06-10 12:07:05.500853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:05.500863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.593 [2024-06-10 12:07:05.500870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:05.500879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.593 [2024-06-10 12:07:05.500886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:05.500895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.593 [2024-06-10 12:07:05.500903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:05.500912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.593 [2024-06-10 12:07:05.500919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:05.500928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.593 [2024-06-10 12:07:05.500935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:05.500944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:106 nsid:1 lba:5616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.593 [2024-06-10 12:07:05.500951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:05.500960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.593 [2024-06-10 12:07:05.500967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:05.500976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.593 [2024-06-10 12:07:05.500983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:05.500992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.593 [2024-06-10 12:07:05.500999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:05.501008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.593 [2024-06-10 12:07:05.501015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:05.501024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.593 [2024-06-10 12:07:05.501031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:05.501040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.593 [2024-06-10 12:07:05.501047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.593 [2024-06-10 12:07:05.501056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 12:07:05.501062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 12:07:05.501078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 12:07:05.501095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5056 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 12:07:05.501112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 12:07:05.501128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 12:07:05.501144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 12:07:05.501160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.594 [2024-06-10 12:07:05.501176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 12:07:05.501192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.594 [2024-06-10 12:07:05.501208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.594 [2024-06-10 12:07:05.501223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 12:07:05.501240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 12:07:05.501261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 
12:07:05.501277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 12:07:05.501293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 12:07:05.501309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 12:07:05.501326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 12:07:05.501342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 12:07:05.501358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 12:07:05.501374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 12:07:05.501390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 12:07:05.501406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 12:07:05.501422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 12:07:05.501438] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.594 [2024-06-10 12:07:05.501454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 12:07:05.501469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.594 [2024-06-10 12:07:05.501486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.594 [2024-06-10 12:07:05.501501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.594 [2024-06-10 12:07:05.501519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.594 [2024-06-10 12:07:05.501535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 12:07:05.501551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 12:07:05.501567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.594 [2024-06-10 12:07:05.501584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.594 [2024-06-10 12:07:05.501600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 12:07:05.501617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.594 [2024-06-10 12:07:05.501633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 12:07:05.501649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 12:07:05.501666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 12:07:05.501683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.594 [2024-06-10 12:07:05.501699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.594 [2024-06-10 12:07:05.501708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.595 [2024-06-10 12:07:05.501715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.501724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.595 [2024-06-10 12:07:05.501733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.501742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.595 [2024-06-10 12:07:05.501749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.501758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.595 [2024-06-10 12:07:05.501765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:18.595 [2024-06-10 12:07:05.501774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.595 [2024-06-10 12:07:05.501781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.501789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.595 [2024-06-10 12:07:05.501796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.501805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.595 [2024-06-10 12:07:05.501813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.501822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.595 [2024-06-10 12:07:05.501829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.501838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.595 [2024-06-10 12:07:05.501845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.501854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.595 [2024-06-10 12:07:05.501861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.501870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.595 [2024-06-10 12:07:05.501876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.501885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.595 [2024-06-10 12:07:05.501892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.501901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.595 [2024-06-10 12:07:05.501908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.501917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.595 [2024-06-10 12:07:05.501924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.501934] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.595 [2024-06-10 12:07:05.501941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.501950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.595 [2024-06-10 12:07:05.501957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.501966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.595 [2024-06-10 12:07:05.501973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.501981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.595 [2024-06-10 12:07:05.501988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.501997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.595 [2024-06-10 12:07:05.502004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.502013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.595 [2024-06-10 12:07:05.502020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.502029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.595 [2024-06-10 12:07:05.502036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.502045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.595 [2024-06-10 12:07:05.502052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.502061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.595 [2024-06-10 12:07:05.502068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.502076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.595 [2024-06-10 12:07:05.502083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.502092] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.595 [2024-06-10 12:07:05.502099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.502108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.595 [2024-06-10 12:07:05.502115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.502124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.595 [2024-06-10 12:07:05.502132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.502141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.595 [2024-06-10 12:07:05.502148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.502157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.595 [2024-06-10 12:07:05.502164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.502173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.595 [2024-06-10 12:07:05.502180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.502189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.595 [2024-06-10 12:07:05.502196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.502206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.595 [2024-06-10 12:07:05.502212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.502221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.595 [2024-06-10 12:07:05.502228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.595 [2024-06-10 12:07:05.502238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.595 [2024-06-10 12:07:05.502248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5528 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.596 [2024-06-10 12:07:05.502264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.596 [2024-06-10 12:07:05.502280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.596 [2024-06-10 12:07:05.502296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.596 [2024-06-10 12:07:05.502312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.596 [2024-06-10 12:07:05.502328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.596 [2024-06-10 12:07:05.502345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.596 [2024-06-10 12:07:05.502361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.596 [2024-06-10 12:07:05.502377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.596 [2024-06-10 12:07:05.502393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.596 [2024-06-10 12:07:05.502409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.596 [2024-06-10 
12:07:05.502424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.596 [2024-06-10 12:07:05.502440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.596 [2024-06-10 12:07:05.502456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.596 [2024-06-10 12:07:05.502472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.596 [2024-06-10 12:07:05.502487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.596 [2024-06-10 12:07:05.502503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.596 [2024-06-10 12:07:05.502518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.596 [2024-06-10 12:07:05.502535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.596 [2024-06-10 12:07:05.502552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.596 [2024-06-10 12:07:05.502568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.596 [2024-06-10 12:07:05.502584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.596 [2024-06-10 12:07:05.502600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.596 [2024-06-10 12:07:05.502615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.596 [2024-06-10 12:07:05.502632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.596 [2024-06-10 12:07:05.502647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.596 [2024-06-10 12:07:05.502662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:18.596 [2024-06-10 12:07:05.502691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:18.596 [2024-06-10 12:07:05.502697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5560 len:8 PRP1 0x0 PRP2 0x0 00:29:18.596 [2024-06-10 12:07:05.502705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502742] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16b9750 was disconnected and freed. reset controller. 
00:29:18.596 [2024-06-10 12:07:05.502752] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:29:18.596 [2024-06-10 12:07:05.502771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.596 [2024-06-10 12:07:05.502779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.596 [2024-06-10 12:07:05.502798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.596 [2024-06-10 12:07:05.502815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.596 [2024-06-10 12:07:05.502830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.596 [2024-06-10 12:07:05.502838] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.596 [2024-06-10 12:07:05.502861] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1696bd0 (9): Bad file descriptor 00:29:18.596 [2024-06-10 12:07:05.505301] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.596 [2024-06-10 12:07:05.579170] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:18.596 
00:29:18.596 Latency(us) 
00:29:18.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:29:18.596 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:29:18.596 Verification LBA range: start 0x0 length 0x4000 
00:29:18.596 NVMe0n1 : 15.00 19878.92 77.65 506.30 0.00 6263.12 785.07 13052.59 
00:29:18.596 =================================================================================================================== 
00:29:18.596 Total : 19878.92 77.65 506.30 0.00 6263.12 785.07 13052.59 
00:29:18.596 Received shutdown signal, test time was about 15.000000 seconds 
00:29:18.596 
00:29:18.596 Latency(us) 
00:29:18.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:29:18.596 =================================================================================================================== 
00:29:18.596 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:29:18.596 12:07:11 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:29:18.596 12:07:11 -- host/failover.sh@65 -- # count=3 
00:29:18.596 12:07:11 -- host/failover.sh@67 -- # (( count != 3 )) 
00:29:18.596 12:07:11 -- host/failover.sh@73 -- # bdevperf_pid=2117741 
00:29:18.597 12:07:11 -- host/failover.sh@75 -- # waitforlisten 2117741 /var/tmp/bdevperf.sock 
00:29:18.597 12:07:11 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 
00:29:18.597 12:07:11 -- common/autotest_common.sh@819 -- # '[' -z 2117741 ']' 
00:29:18.597 12:07:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:29:18.597 12:07:11 -- common/autotest_common.sh@824 -- # local max_retries=100 
00:29:18.597 12:07:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:29:18.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:18.597 12:07:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:18.597 12:07:11 -- common/autotest_common.sh@10 -- # set +x 00:29:19.167 12:07:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:19.167 12:07:12 -- common/autotest_common.sh@852 -- # return 0 00:29:19.167 12:07:12 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:19.167 [2024-06-10 12:07:12.766489] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:19.167 12:07:12 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:19.167 [2024-06-10 12:07:12.930901] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:19.427 12:07:12 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:19.427 NVMe0n1 00:29:19.427 12:07:13 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:19.687 00:29:19.687 12:07:13 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:19.949 00:29:19.949 12:07:13 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:19.949 12:07:13 -- host/failover.sh@82 -- # grep -q NVMe0 00:29:20.210 12:07:13 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:20.470 12:07:13 -- host/failover.sh@87 -- # sleep 3 00:29:23.775 12:07:17 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:23.775 12:07:17 -- host/failover.sh@88 -- # grep -q NVMe0 00:29:23.775 12:07:17 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:23.775 12:07:17 -- host/failover.sh@90 -- # run_test_pid=2118773 00:29:23.775 12:07:17 -- host/failover.sh@92 -- # wait 2118773 00:29:24.716 0 00:29:24.716 12:07:18 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:24.716 [2024-06-10 12:07:11.859373] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:29:24.716 [2024-06-10 12:07:11.859431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117741 ] 00:29:24.716 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.716 [2024-06-10 12:07:11.919091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.716 [2024-06-10 12:07:11.980921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.716 [2024-06-10 12:07:13.969162] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:24.716 [2024-06-10 12:07:13.969208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.716 [2024-06-10 12:07:13.969218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.716 [2024-06-10 12:07:13.969228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.716 [2024-06-10 12:07:13.969236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.716 [2024-06-10 12:07:13.969248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.716 [2024-06-10 12:07:13.969255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.716 [2024-06-10 12:07:13.969263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.716 [2024-06-10 12:07:13.969269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.716 [2024-06-10 12:07:13.969277] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.716 [2024-06-10 12:07:13.969300] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.716 [2024-06-10 12:07:13.969313] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2dbd0 (9): Bad file descriptor 00:29:24.716 [2024-06-10 12:07:13.980530] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:24.716 Running I/O for 1 seconds... 
00:29:24.716 
00:29:24.716 Latency(us) 
00:29:24.716 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:29:24.716 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:29:24.716 Verification LBA range: start 0x0 length 0x4000 
00:29:24.716 NVMe0n1 : 1.00 19970.04 78.01 0.00 0.00 6380.03 1140.05 8355.84 
00:29:24.716 =================================================================================================================== 
00:29:24.716 Total : 19970.04 78.01 0.00 0.00 6380.03 1140.05 8355.84 
00:29:24.716 12:07:18 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:29:24.716 12:07:18 -- host/failover.sh@95 -- # grep -q NVMe0 
00:29:24.978 12:07:18 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:29:24.978 12:07:18 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:29:24.978 12:07:18 -- host/failover.sh@99 -- # grep -q NVMe0 
00:29:25.238 12:07:18 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:29:25.238 12:07:18 -- host/failover.sh@101 -- # sleep 3 
00:29:28.545 12:07:21 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:29:28.545 12:07:21 -- host/failover.sh@103 -- # grep -q NVMe0 
00:29:28.545 12:07:22 -- host/failover.sh@108 -- # killprocess 2117741 
00:29:28.545 12:07:22 -- common/autotest_common.sh@926 -- # '[' -z 2117741 ']' 
00:29:28.545 12:07:22 -- common/autotest_common.sh@930 -- # kill -0 2117741 
00:29:28.545 12:07:22 -- common/autotest_common.sh@931 -- # uname 
00:29:28.545 12:07:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
00:29:28.545 12:07:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2117741 
00:29:28.545 12:07:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 
00:29:28.545 12:07:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 
00:29:28.545 12:07:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2117741' 
00:29:28.545 killing process with pid 2117741 
00:29:28.545 12:07:22 -- common/autotest_common.sh@945 -- # kill 2117741 
00:29:28.545 12:07:22 -- common/autotest_common.sh@950 -- # wait 2117741 
00:29:28.545 12:07:22 -- host/failover.sh@110 -- # sync 
00:29:28.545 12:07:22 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:29:28.806 12:07:22 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 
00:29:28.806 12:07:22 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 
00:29:28.806 12:07:22 -- host/failover.sh@116 -- # nvmftestfini 
00:29:28.806 12:07:22 -- nvmf/common.sh@476 -- # nvmfcleanup 
00:29:28.806 12:07:22 -- nvmf/common.sh@116 -- # sync 
00:29:28.806 12:07:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 
00:29:28.806 12:07:22 -- nvmf/common.sh@119 -- # set +e 
00:29:28.806 12:07:22 -- nvmf/common.sh@120 -- # for i in {1..20} 
00:29:28.806 12:07:22 -- nvmf/common.sh@121 -- # 
modprobe -v -r nvme-tcp 00:29:28.806 rmmod nvme_tcp 00:29:28.806 rmmod nvme_fabrics 00:29:28.806 rmmod nvme_keyring 00:29:28.806 12:07:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:28.806 12:07:22 -- nvmf/common.sh@123 -- # set -e 00:29:28.806 12:07:22 -- nvmf/common.sh@124 -- # return 0 00:29:28.806 12:07:22 -- nvmf/common.sh@477 -- # '[' -n 2113980 ']' 00:29:28.806 12:07:22 -- nvmf/common.sh@478 -- # killprocess 2113980 00:29:28.806 12:07:22 -- common/autotest_common.sh@926 -- # '[' -z 2113980 ']' 00:29:28.806 12:07:22 -- common/autotest_common.sh@930 -- # kill -0 2113980 00:29:28.806 12:07:22 -- common/autotest_common.sh@931 -- # uname 00:29:28.806 12:07:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:28.806 12:07:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2113980 00:29:28.806 12:07:22 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:28.806 12:07:22 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:28.806 12:07:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2113980' 00:29:28.806 killing process with pid 2113980 00:29:28.806 12:07:22 -- common/autotest_common.sh@945 -- # kill 2113980 00:29:28.806 12:07:22 -- common/autotest_common.sh@950 -- # wait 2113980 00:29:29.067 12:07:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:29.068 12:07:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:29.068 12:07:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:29.068 12:07:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:29.068 12:07:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:29.068 12:07:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.068 12:07:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:29.068 12:07:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.614 12:07:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:31.614 00:29:31.614 real 0m39.278s 00:29:31.614 user 2m0.466s 00:29:31.614 sys 0m8.215s 00:29:31.614 12:07:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:31.614 12:07:24 -- common/autotest_common.sh@10 -- # set +x 00:29:31.614 ************************************ 00:29:31.614 END TEST nvmf_failover 00:29:31.614 ************************************ 00:29:31.614 12:07:24 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:31.614 12:07:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:31.614 12:07:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:31.614 12:07:24 -- common/autotest_common.sh@10 -- # set +x 00:29:31.614 ************************************ 00:29:31.614 START TEST nvmf_discovery 00:29:31.614 ************************************ 00:29:31.614 12:07:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:31.614 * Looking for test storage... 
00:29:31.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:31.614 12:07:24 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:31.614 12:07:24 -- nvmf/common.sh@7 -- # uname -s 00:29:31.614 12:07:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:31.614 12:07:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:31.614 12:07:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:31.614 12:07:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:31.614 12:07:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:31.614 12:07:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:31.614 12:07:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:31.614 12:07:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:31.614 12:07:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:31.614 12:07:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:31.614 12:07:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:31.614 12:07:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:31.614 12:07:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:31.614 12:07:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:31.614 12:07:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:31.614 12:07:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:31.614 12:07:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:31.615 12:07:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:31.615 12:07:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:31.615 12:07:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.615 12:07:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.615 12:07:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.615 12:07:24 -- paths/export.sh@5 -- # export PATH 00:29:31.615 12:07:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.615 12:07:24 -- nvmf/common.sh@46 -- # : 0 00:29:31.615 12:07:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:31.615 12:07:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:31.615 12:07:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:31.615 12:07:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:31.615 12:07:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:31.615 12:07:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:31.615 12:07:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:31.615 12:07:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:31.615 12:07:24 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:29:31.615 12:07:24 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:29:31.615 12:07:24 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:31.615 12:07:24 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:29:31.615 12:07:24 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:29:31.615 12:07:24 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:29:31.615 12:07:24 -- host/discovery.sh@25 -- # nvmftestinit 00:29:31.615 12:07:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:31.615 12:07:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:31.615 12:07:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:31.615 12:07:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:31.615 12:07:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:31.615 12:07:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.615 12:07:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:31.615 12:07:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.615 12:07:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:31.615 12:07:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:31.615 12:07:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:31.615 12:07:24 -- common/autotest_common.sh@10 -- # set +x 00:29:38.203 12:07:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:38.203 12:07:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:38.203 12:07:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:38.203 12:07:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:38.203 12:07:31 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:38.203 12:07:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:38.203 12:07:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:38.203 12:07:31 -- nvmf/common.sh@294 -- # net_devs=() 00:29:38.203 12:07:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:38.203 12:07:31 -- nvmf/common.sh@295 -- # e810=() 00:29:38.203 12:07:31 -- nvmf/common.sh@295 -- # local -ga e810 00:29:38.203 12:07:31 -- nvmf/common.sh@296 -- # x722=() 00:29:38.203 12:07:31 -- nvmf/common.sh@296 -- # local -ga x722 00:29:38.203 12:07:31 -- nvmf/common.sh@297 -- # mlx=() 00:29:38.203 12:07:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:38.203 12:07:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:38.203 12:07:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:38.203 12:07:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:38.203 12:07:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:38.203 12:07:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:38.203 12:07:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:38.203 12:07:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:38.203 12:07:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:38.203 12:07:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:38.203 12:07:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:38.203 12:07:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:38.203 12:07:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:38.203 12:07:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:38.203 12:07:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:38.203 12:07:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:38.203 12:07:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:38.203 12:07:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:38.203 12:07:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:38.203 12:07:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:38.203 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:38.203 12:07:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:38.203 12:07:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:38.203 12:07:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:38.203 12:07:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:38.203 12:07:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:38.203 12:07:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:38.203 12:07:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:38.203 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:38.203 12:07:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:38.203 12:07:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:38.203 12:07:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:38.203 12:07:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:38.203 12:07:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:38.203 12:07:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:38.203 12:07:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:38.203 12:07:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:38.203 12:07:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:38.203 
12:07:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:38.203 12:07:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:38.203 12:07:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:38.203 12:07:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:38.203 Found net devices under 0000:31:00.0: cvl_0_0 00:29:38.203 12:07:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:38.203 12:07:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:38.203 12:07:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:38.203 12:07:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:38.203 12:07:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:38.203 12:07:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:38.204 Found net devices under 0000:31:00.1: cvl_0_1 00:29:38.204 12:07:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:38.204 12:07:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:38.204 12:07:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:38.204 12:07:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:38.204 12:07:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:38.204 12:07:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:38.204 12:07:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:38.204 12:07:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:38.204 12:07:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:38.204 12:07:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:38.204 12:07:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:38.204 12:07:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:38.204 12:07:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:38.204 12:07:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:38.204 12:07:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:38.204 12:07:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:38.204 12:07:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:38.204 12:07:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:38.204 12:07:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:38.204 12:07:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:38.204 12:07:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:38.204 12:07:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:38.204 12:07:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:38.204 12:07:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:38.204 12:07:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:38.204 12:07:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:38.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:38.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:29:38.204 00:29:38.204 --- 10.0.0.2 ping statistics --- 00:29:38.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:38.204 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:29:38.204 12:07:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:38.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:38.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.399 ms 00:29:38.204 00:29:38.204 --- 10.0.0.1 ping statistics --- 00:29:38.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:38.204 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:29:38.204 12:07:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:38.204 12:07:31 -- nvmf/common.sh@410 -- # return 0 00:29:38.204 12:07:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:38.204 12:07:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:38.204 12:07:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:38.204 12:07:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:38.204 12:07:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:38.204 12:07:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:38.204 12:07:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:38.204 12:07:31 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:29:38.204 12:07:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:38.204 12:07:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:38.204 12:07:31 -- common/autotest_common.sh@10 -- # set +x 00:29:38.204 12:07:31 -- nvmf/common.sh@469 -- # nvmfpid=2123987 00:29:38.204 12:07:31 -- nvmf/common.sh@470 -- # waitforlisten 2123987 00:29:38.204 12:07:31 -- common/autotest_common.sh@819 -- # '[' -z 2123987 ']' 00:29:38.204 12:07:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:38.204 12:07:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:38.204 12:07:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:38.204 12:07:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:38.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:38.204 12:07:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:38.204 12:07:31 -- common/autotest_common.sh@10 -- # set +x 00:29:38.204 [2024-06-10 12:07:31.940223] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:29:38.204 [2024-06-10 12:07:31.940292] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:38.466 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.466 [2024-06-10 12:07:32.029009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.466 [2024-06-10 12:07:32.120435] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:38.466 [2024-06-10 12:07:32.120588] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:38.466 [2024-06-10 12:07:32.120598] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:38.466 [2024-06-10 12:07:32.120606] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:38.466 [2024-06-10 12:07:32.120630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.040 12:07:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:39.040 12:07:32 -- common/autotest_common.sh@852 -- # return 0 00:29:39.040 12:07:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:39.040 12:07:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:39.040 12:07:32 -- common/autotest_common.sh@10 -- # set +x 00:29:39.040 12:07:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:39.040 12:07:32 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:39.040 12:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:39.040 12:07:32 -- common/autotest_common.sh@10 -- # set +x 00:29:39.040 [2024-06-10 12:07:32.779964] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:39.040 12:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:39.040 12:07:32 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:29:39.040 12:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:39.040 12:07:32 -- common/autotest_common.sh@10 -- # set +x 00:29:39.040 [2024-06-10 12:07:32.788155] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:39.040 12:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:39.040 12:07:32 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:29:39.040 12:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:39.040 12:07:32 -- common/autotest_common.sh@10 -- # set +x 00:29:39.040 null0 00:29:39.040 12:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:39.040 12:07:32 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:29:39.040 12:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:39.040 12:07:32 -- common/autotest_common.sh@10 -- # set +x 00:29:39.040 null1 00:29:39.040 12:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:39.041 12:07:32 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:29:39.041 12:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:39.041 12:07:32 -- common/autotest_common.sh@10 -- # set +x 00:29:39.301 12:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:39.301 12:07:32 -- host/discovery.sh@45 -- # hostpid=2124225 00:29:39.301 12:07:32 -- host/discovery.sh@46 -- # waitforlisten 2124225 /tmp/host.sock 00:29:39.301 12:07:32 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:29:39.301 12:07:32 -- common/autotest_common.sh@819 -- # '[' -z 2124225 ']' 00:29:39.301 12:07:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:29:39.301 12:07:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:39.301 12:07:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:39.301 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:39.301 12:07:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:39.301 12:07:32 -- common/autotest_common.sh@10 -- # set +x 00:29:39.301 [2024-06-10 12:07:32.865141] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:29:39.301 [2024-06-10 12:07:32.865211] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2124225 ] 00:29:39.301 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.301 [2024-06-10 12:07:32.931800] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.301 [2024-06-10 12:07:33.004160] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:39.301 [2024-06-10 12:07:33.004298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:39.872 12:07:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:39.872 12:07:33 -- common/autotest_common.sh@852 -- # return 0 00:29:39.872 12:07:33 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:39.872 12:07:33 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:29:39.872 12:07:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.133 12:07:33 -- common/autotest_common.sh@10 -- # set +x 00:29:40.133 12:07:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.133 12:07:33 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:29:40.133 12:07:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.133 12:07:33 -- common/autotest_common.sh@10 -- # set +x 00:29:40.133 12:07:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.133 12:07:33 -- host/discovery.sh@72 -- # notify_id=0 00:29:40.133 12:07:33 -- host/discovery.sh@78 -- # get_subsystem_names 00:29:40.133 12:07:33 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:40.133 12:07:33 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:40.133 12:07:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.133 12:07:33 -- host/discovery.sh@59 -- # sort 00:29:40.133 12:07:33 -- common/autotest_common.sh@10 -- # set +x 00:29:40.133 12:07:33 -- host/discovery.sh@59 -- # xargs 00:29:40.133 12:07:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.133 12:07:33 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:29:40.133 12:07:33 -- host/discovery.sh@79 -- # get_bdev_list 00:29:40.133 12:07:33 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:40.133 12:07:33 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:40.133 12:07:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.133 12:07:33 -- common/autotest_common.sh@10 -- # set +x 00:29:40.133 12:07:33 -- host/discovery.sh@55 -- # sort 00:29:40.133 12:07:33 -- host/discovery.sh@55 -- # xargs 00:29:40.133 12:07:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.133 12:07:33 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:29:40.133 12:07:33 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:29:40.133 12:07:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.133 12:07:33 -- common/autotest_common.sh@10 -- # set +x 00:29:40.133 12:07:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.133 12:07:33 -- host/discovery.sh@82 -- # get_subsystem_names 00:29:40.133 12:07:33 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:40.133 12:07:33 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:29:40.133 12:07:33 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:40.133 12:07:33 -- common/autotest_common.sh@10 -- # set +x 00:29:40.133 12:07:33 -- host/discovery.sh@59 -- # sort 00:29:40.133 12:07:33 -- host/discovery.sh@59 -- # xargs 00:29:40.133 12:07:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.133 12:07:33 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:29:40.133 12:07:33 -- host/discovery.sh@83 -- # get_bdev_list 00:29:40.133 12:07:33 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:40.133 12:07:33 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:40.133 12:07:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.133 12:07:33 -- common/autotest_common.sh@10 -- # set +x 00:29:40.133 12:07:33 -- host/discovery.sh@55 -- # sort 00:29:40.133 12:07:33 -- host/discovery.sh@55 -- # xargs 00:29:40.133 12:07:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.133 12:07:33 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:29:40.133 12:07:33 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:29:40.133 12:07:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.133 12:07:33 -- common/autotest_common.sh@10 -- # set +x 00:29:40.133 12:07:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.133 12:07:33 -- host/discovery.sh@86 -- # get_subsystem_names 00:29:40.133 12:07:33 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:40.133 12:07:33 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:40.133 12:07:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.133 12:07:33 -- common/autotest_common.sh@10 -- # set +x 00:29:40.133 12:07:33 -- host/discovery.sh@59 -- # sort 00:29:40.133 12:07:33 -- host/discovery.sh@59 -- # xargs 00:29:40.133 12:07:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.393 12:07:33 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:29:40.393 12:07:33 -- host/discovery.sh@87 -- # get_bdev_list 00:29:40.393 12:07:33 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:40.393 12:07:33 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:40.393 12:07:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.393 12:07:33 -- host/discovery.sh@55 -- # sort 00:29:40.393 12:07:33 -- common/autotest_common.sh@10 -- # set +x 00:29:40.393 12:07:33 -- host/discovery.sh@55 -- # xargs 00:29:40.393 12:07:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.393 12:07:33 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:29:40.393 12:07:33 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:40.393 12:07:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.393 12:07:33 -- common/autotest_common.sh@10 -- # set +x 00:29:40.393 [2024-06-10 12:07:33.999267] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:40.393 12:07:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.393 12:07:34 -- host/discovery.sh@92 -- # get_subsystem_names 00:29:40.393 12:07:34 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:40.393 12:07:34 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:40.393 12:07:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.393 12:07:34 -- common/autotest_common.sh@10 -- # set +x 00:29:40.393 12:07:34 -- host/discovery.sh@59 -- # sort 00:29:40.393 12:07:34 -- 
host/discovery.sh@59 -- # xargs 00:29:40.393 12:07:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.393 12:07:34 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:29:40.394 12:07:34 -- host/discovery.sh@93 -- # get_bdev_list 00:29:40.394 12:07:34 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:40.394 12:07:34 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:40.394 12:07:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.394 12:07:34 -- host/discovery.sh@55 -- # sort 00:29:40.394 12:07:34 -- common/autotest_common.sh@10 -- # set +x 00:29:40.394 12:07:34 -- host/discovery.sh@55 -- # xargs 00:29:40.394 12:07:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.394 12:07:34 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:29:40.394 12:07:34 -- host/discovery.sh@94 -- # get_notification_count 00:29:40.394 12:07:34 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:40.394 12:07:34 -- host/discovery.sh@74 -- # jq '. | length' 00:29:40.394 12:07:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.394 12:07:34 -- common/autotest_common.sh@10 -- # set +x 00:29:40.394 12:07:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.394 12:07:34 -- host/discovery.sh@74 -- # notification_count=0 00:29:40.394 12:07:34 -- host/discovery.sh@75 -- # notify_id=0 00:29:40.394 12:07:34 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:29:40.394 12:07:34 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:29:40.394 12:07:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.394 12:07:34 -- common/autotest_common.sh@10 -- # set +x 00:29:40.394 12:07:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.394 12:07:34 -- host/discovery.sh@100 -- # sleep 1 00:29:40.965 [2024-06-10 12:07:34.709343] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:40.965 [2024-06-10 12:07:34.709367] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:40.965 [2024-06-10 12:07:34.709382] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:41.225 [2024-06-10 12:07:34.797657] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:41.226 [2024-06-10 12:07:34.983590] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:41.226 [2024-06-10 12:07:34.983614] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:41.486 12:07:35 -- host/discovery.sh@101 -- # get_subsystem_names 00:29:41.486 12:07:35 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:41.486 12:07:35 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:41.486 12:07:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:41.486 12:07:35 -- host/discovery.sh@59 -- # sort 00:29:41.486 12:07:35 -- common/autotest_common.sh@10 -- # set +x 00:29:41.486 12:07:35 -- host/discovery.sh@59 -- # xargs 00:29:41.486 12:07:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:41.486 12:07:35 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.486 12:07:35 -- host/discovery.sh@102 -- # get_bdev_list 00:29:41.486 12:07:35 -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:41.486 12:07:35 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:41.486 12:07:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:41.486 12:07:35 -- host/discovery.sh@55 -- # sort 00:29:41.486 12:07:35 -- common/autotest_common.sh@10 -- # set +x 00:29:41.486 12:07:35 -- host/discovery.sh@55 -- # xargs 00:29:41.486 12:07:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:41.748 12:07:35 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:29:41.748 12:07:35 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:29:41.748 12:07:35 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:41.748 12:07:35 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:41.748 12:07:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:41.748 12:07:35 -- common/autotest_common.sh@10 -- # set +x 00:29:41.748 12:07:35 -- host/discovery.sh@63 -- # sort -n 00:29:41.748 12:07:35 -- host/discovery.sh@63 -- # xargs 00:29:41.748 12:07:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:41.748 12:07:35 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:29:41.748 12:07:35 -- host/discovery.sh@104 -- # get_notification_count 00:29:41.748 12:07:35 -- host/discovery.sh@74 -- # jq '. | length' 00:29:41.748 12:07:35 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:41.748 12:07:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:41.748 12:07:35 -- common/autotest_common.sh@10 -- # set +x 00:29:41.748 12:07:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:41.748 12:07:35 -- host/discovery.sh@74 -- # notification_count=1 00:29:41.748 12:07:35 -- host/discovery.sh@75 -- # notify_id=1 00:29:41.748 12:07:35 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:29:41.748 12:07:35 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:29:41.748 12:07:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:41.748 12:07:35 -- common/autotest_common.sh@10 -- # set +x 00:29:41.748 12:07:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:41.748 12:07:35 -- host/discovery.sh@109 -- # sleep 1 00:29:42.690 12:07:36 -- host/discovery.sh@110 -- # get_bdev_list 00:29:42.690 12:07:36 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:42.690 12:07:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:42.690 12:07:36 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:42.690 12:07:36 -- common/autotest_common.sh@10 -- # set +x 00:29:42.690 12:07:36 -- host/discovery.sh@55 -- # sort 00:29:42.690 12:07:36 -- host/discovery.sh@55 -- # xargs 00:29:42.690 12:07:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:42.690 12:07:36 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:42.690 12:07:36 -- host/discovery.sh@111 -- # get_notification_count 00:29:42.690 12:07:36 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:29:42.690 12:07:36 -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:42.690 12:07:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:42.690 12:07:36 -- common/autotest_common.sh@10 -- # set +x 00:29:42.690 12:07:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:42.690 12:07:36 -- host/discovery.sh@74 -- # notification_count=1 00:29:42.951 12:07:36 -- host/discovery.sh@75 -- # notify_id=2 00:29:42.951 12:07:36 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:29:42.951 12:07:36 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:29:42.951 12:07:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:42.951 12:07:36 -- common/autotest_common.sh@10 -- # set +x 00:29:42.951 [2024-06-10 12:07:36.469846] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:42.951 [2024-06-10 12:07:36.470075] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:42.951 [2024-06-10 12:07:36.470104] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:42.951 12:07:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:42.951 12:07:36 -- host/discovery.sh@117 -- # sleep 1 00:29:42.951 [2024-06-10 12:07:36.559319] bdev_nvme.c:6677:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:29:43.211 [2024-06-10 12:07:36.829734] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:43.211 [2024-06-10 12:07:36.829752] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:43.211 [2024-06-10 12:07:36.829758] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:43.783 12:07:37 -- host/discovery.sh@118 -- # get_subsystem_names 00:29:43.783 12:07:37 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:43.783 12:07:37 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:43.783 12:07:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:43.783 12:07:37 -- host/discovery.sh@59 -- # sort 00:29:43.783 12:07:37 -- common/autotest_common.sh@10 -- # set +x 00:29:43.783 12:07:37 -- host/discovery.sh@59 -- # xargs 00:29:43.783 12:07:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:43.783 12:07:37 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.783 12:07:37 -- host/discovery.sh@119 -- # get_bdev_list 00:29:43.783 12:07:37 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:43.783 12:07:37 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:43.783 12:07:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:43.783 12:07:37 -- host/discovery.sh@55 -- # sort 00:29:43.783 12:07:37 -- common/autotest_common.sh@10 -- # set +x 00:29:43.783 12:07:37 -- host/discovery.sh@55 -- # xargs 00:29:44.044 12:07:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:44.044 12:07:37 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:44.044 12:07:37 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:29:44.044 12:07:37 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:44.044 12:07:37 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:44.044 12:07:37 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:29:44.044 12:07:37 -- common/autotest_common.sh@10 -- # set +x 00:29:44.044 12:07:37 -- host/discovery.sh@63 -- # sort -n 00:29:44.044 12:07:37 -- host/discovery.sh@63 -- # xargs 00:29:44.044 12:07:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:44.044 12:07:37 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:29:44.044 12:07:37 -- host/discovery.sh@121 -- # get_notification_count 00:29:44.044 12:07:37 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:44.044 12:07:37 -- host/discovery.sh@74 -- # jq '. | length' 00:29:44.044 12:07:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:44.044 12:07:37 -- common/autotest_common.sh@10 -- # set +x 00:29:44.044 12:07:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:44.044 12:07:37 -- host/discovery.sh@74 -- # notification_count=0 00:29:44.044 12:07:37 -- host/discovery.sh@75 -- # notify_id=2 00:29:44.044 12:07:37 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:29:44.044 12:07:37 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:44.044 12:07:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:44.044 12:07:37 -- common/autotest_common.sh@10 -- # set +x 00:29:44.044 [2024-06-10 12:07:37.689315] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:44.044 [2024-06-10 12:07:37.689337] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:44.044 12:07:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:44.044 12:07:37 -- host/discovery.sh@127 -- # sleep 1 00:29:44.044 [2024-06-10 12:07:37.697348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.044 [2024-06-10 12:07:37.697369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.044 [2024-06-10 12:07:37.697379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.044 [2024-06-10 12:07:37.697387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.044 [2024-06-10 12:07:37.697394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.045 [2024-06-10 12:07:37.697401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.045 [2024-06-10 12:07:37.697409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.045 [2024-06-10 12:07:37.697416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.045 [2024-06-10 12:07:37.697423] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227bb10 is same with the state(5) to be set 00:29:44.045 [2024-06-10 12:07:37.707363] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227bb10 (9): Bad file descriptor 00:29:44.045 [2024-06-10 12:07:37.717408] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:44.045 [2024-06-10 12:07:37.717774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.045 [2024-06-10 12:07:37.718124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.045 [2024-06-10 12:07:37.718135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227bb10 with addr=10.0.0.2, port=4420 00:29:44.045 [2024-06-10 12:07:37.718146] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227bb10 is same with the state(5) to be set 00:29:44.045 [2024-06-10 12:07:37.718158] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227bb10 (9): Bad file descriptor 00:29:44.045 [2024-06-10 12:07:37.718171] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:44.045 [2024-06-10 12:07:37.718180] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:44.045 [2024-06-10 12:07:37.718190] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:44.045 [2024-06-10 12:07:37.718203] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.045 [2024-06-10 12:07:37.727462] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:44.045 [2024-06-10 12:07:37.727822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.045 [2024-06-10 12:07:37.728040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.045 [2024-06-10 12:07:37.728051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227bb10 with addr=10.0.0.2, port=4420 00:29:44.045 [2024-06-10 12:07:37.728059] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227bb10 is same with the state(5) to be set 00:29:44.045 [2024-06-10 12:07:37.728070] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227bb10 (9): Bad file descriptor 00:29:44.045 [2024-06-10 12:07:37.728081] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:44.045 [2024-06-10 12:07:37.728087] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:44.045 [2024-06-10 12:07:37.728094] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:44.045 [2024-06-10 12:07:37.728104] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
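The reconnect failures above are the expected fallout of the nvmf_subsystem_remove_listener call traced earlier: the host still holds a path to 10.0.0.2:4420, so every connect() attempt is refused (errno 111) until the next discovery log page drops that path. A sketch of the two sides of that step, restating the traced RPCs through scripts/rpc.py ($SPDK_DIR is a placeholder):

  # Target side: drop the 4420 listener from the subsystem, as the test does.
  "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_remove_listener \
          nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Host side: after the discovery poller processes the change, only 4421 should remain.
  "$SPDK_DIR/scripts/rpc.py" -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs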
00:29:44.045 [2024-06-10 12:07:37.737516] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:44.045 [2024-06-10 12:07:37.737874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.045 [2024-06-10 12:07:37.738135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.045 [2024-06-10 12:07:37.738145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227bb10 with addr=10.0.0.2, port=4420 00:29:44.045 [2024-06-10 12:07:37.738153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227bb10 is same with the state(5) to be set 00:29:44.045 [2024-06-10 12:07:37.738165] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227bb10 (9): Bad file descriptor 00:29:44.045 [2024-06-10 12:07:37.738176] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:44.045 [2024-06-10 12:07:37.738183] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:44.045 [2024-06-10 12:07:37.738190] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:44.045 [2024-06-10 12:07:37.738200] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.045 [2024-06-10 12:07:37.747574] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:44.045 [2024-06-10 12:07:37.747959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.045 [2024-06-10 12:07:37.748390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.045 [2024-06-10 12:07:37.748401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227bb10 with addr=10.0.0.2, port=4420 00:29:44.045 [2024-06-10 12:07:37.748408] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227bb10 is same with the state(5) to be set 00:29:44.045 [2024-06-10 12:07:37.748419] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227bb10 (9): Bad file descriptor 00:29:44.045 [2024-06-10 12:07:37.748429] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:44.045 [2024-06-10 12:07:37.748435] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:44.045 [2024-06-10 12:07:37.748442] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:44.045 [2024-06-10 12:07:37.748452] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
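The retry cadence seen in these repeated reset attempts is bdev_nvme's default reconnect behaviour. If a test or deployment needed the stale path retried less aggressively, the bdev_nvme_set_options RPC exposes reconnect knobs; the flags below are an assumption about the stock rpc.py client and are not exercised anywhere in this run:

  # Hypothetical tuning, typically issued on the host app before any controller is attached:
  # retry a lost path once per second and give up on the controller after 10 seconds.
  "$SPDK_DIR/scripts/rpc.py" -s /tmp/host.sock bdev_nvme_set_options \
          --reconnect-delay-sec 1 --ctrlr-loss-timeout-sec 10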
00:29:44.045 [2024-06-10 12:07:37.757624] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:44.045 [2024-06-10 12:07:37.757989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.045 [2024-06-10 12:07:37.758260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.045 [2024-06-10 12:07:37.758270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227bb10 with addr=10.0.0.2, port=4420 00:29:44.045 [2024-06-10 12:07:37.758277] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227bb10 is same with the state(5) to be set 00:29:44.045 [2024-06-10 12:07:37.758288] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227bb10 (9): Bad file descriptor 00:29:44.045 [2024-06-10 12:07:37.758298] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:44.045 [2024-06-10 12:07:37.758304] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:44.045 [2024-06-10 12:07:37.758311] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:44.045 [2024-06-10 12:07:37.758321] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.045 [2024-06-10 12:07:37.767674] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:44.045 [2024-06-10 12:07:37.768088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.045 [2024-06-10 12:07:37.768310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.045 [2024-06-10 12:07:37.768320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227bb10 with addr=10.0.0.2, port=4420 00:29:44.045 [2024-06-10 12:07:37.768327] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227bb10 is same with the state(5) to be set 00:29:44.045 [2024-06-10 12:07:37.768339] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227bb10 (9): Bad file descriptor 00:29:44.045 [2024-06-10 12:07:37.768349] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:44.045 [2024-06-10 12:07:37.768355] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:44.045 [2024-06-10 12:07:37.768362] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:44.045 [2024-06-10 12:07:37.768372] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
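Once the discovery poller drops the dead 4420 path (next block), the script re-checks the controller name, the bdev list, the remaining trsvcid and the notification counter. That counter is simply the number of host-app notifications newer than a saved cursor; a sketch of the same check, assuming the /tmp/host.sock socket from this run and a placeholder $SPDK_DIR:

  # notify_id is the cursor the test carries forward (2 at this point in the run).
  notify_id=2
  count=$("$SPDK_DIR/scripts/rpc.py" -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
          | jq '. | length')
  echo "notifications newer than id $notify_id: $count"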
00:29:44.045 [2024-06-10 12:07:37.776600] bdev_nvme.c:6540:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:29:44.045 [2024-06-10 12:07:37.776621] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:44.987 12:07:38 -- host/discovery.sh@128 -- # get_subsystem_names 00:29:44.987 12:07:38 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:44.987 12:07:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:44.987 12:07:38 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:44.987 12:07:38 -- common/autotest_common.sh@10 -- # set +x 00:29:44.987 12:07:38 -- host/discovery.sh@59 -- # sort 00:29:44.987 12:07:38 -- host/discovery.sh@59 -- # xargs 00:29:44.987 12:07:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:44.987 12:07:38 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.987 12:07:38 -- host/discovery.sh@129 -- # get_bdev_list 00:29:44.987 12:07:38 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:44.987 12:07:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:44.987 12:07:38 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:44.987 12:07:38 -- host/discovery.sh@55 -- # sort 00:29:44.987 12:07:38 -- common/autotest_common.sh@10 -- # set +x 00:29:44.987 12:07:38 -- host/discovery.sh@55 -- # xargs 00:29:45.248 12:07:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:45.248 12:07:38 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:45.248 12:07:38 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:29:45.248 12:07:38 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:45.248 12:07:38 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:45.248 12:07:38 -- host/discovery.sh@63 -- # sort -n 00:29:45.248 12:07:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:45.248 12:07:38 -- host/discovery.sh@63 -- # xargs 00:29:45.248 12:07:38 -- common/autotest_common.sh@10 -- # set +x 00:29:45.248 12:07:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:45.248 12:07:38 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:29:45.248 12:07:38 -- host/discovery.sh@131 -- # get_notification_count 00:29:45.248 12:07:38 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:45.248 12:07:38 -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:45.248 12:07:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:45.248 12:07:38 -- common/autotest_common.sh@10 -- # set +x 00:29:45.248 12:07:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:45.248 12:07:38 -- host/discovery.sh@74 -- # notification_count=0 00:29:45.248 12:07:38 -- host/discovery.sh@75 -- # notify_id=2 00:29:45.248 12:07:38 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:29:45.248 12:07:38 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:29:45.248 12:07:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:45.248 12:07:38 -- common/autotest_common.sh@10 -- # set +x 00:29:45.248 12:07:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:45.248 12:07:38 -- host/discovery.sh@135 -- # sleep 1 00:29:46.190 12:07:39 -- host/discovery.sh@136 -- # get_subsystem_names 00:29:46.191 12:07:39 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:46.191 12:07:39 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:46.191 12:07:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:46.191 12:07:39 -- host/discovery.sh@59 -- # sort 00:29:46.191 12:07:39 -- common/autotest_common.sh@10 -- # set +x 00:29:46.191 12:07:39 -- host/discovery.sh@59 -- # xargs 00:29:46.191 12:07:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:46.451 12:07:39 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:29:46.451 12:07:39 -- host/discovery.sh@137 -- # get_bdev_list 00:29:46.451 12:07:39 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:46.451 12:07:39 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:46.451 12:07:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:46.451 12:07:39 -- host/discovery.sh@55 -- # sort 00:29:46.451 12:07:39 -- common/autotest_common.sh@10 -- # set +x 00:29:46.452 12:07:39 -- host/discovery.sh@55 -- # xargs 00:29:46.452 12:07:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:46.452 12:07:40 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:29:46.452 12:07:40 -- host/discovery.sh@138 -- # get_notification_count 00:29:46.452 12:07:40 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:46.452 12:07:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:46.452 12:07:40 -- common/autotest_common.sh@10 -- # set +x 00:29:46.452 12:07:40 -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:46.452 12:07:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:46.452 12:07:40 -- host/discovery.sh@74 -- # notification_count=2 00:29:46.452 12:07:40 -- host/discovery.sh@75 -- # notify_id=4 00:29:46.452 12:07:40 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:29:46.452 12:07:40 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:46.452 12:07:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:46.452 12:07:40 -- common/autotest_common.sh@10 -- # set +x 00:29:47.393 [2024-06-10 12:07:41.094912] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:47.393 [2024-06-10 12:07:41.094935] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:47.393 [2024-06-10 12:07:41.094948] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:47.655 [2024-06-10 12:07:41.184233] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:29:47.655 [2024-06-10 12:07:41.248320] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:47.655 [2024-06-10 12:07:41.248352] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:47.655 12:07:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.655 12:07:41 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:47.655 12:07:41 -- common/autotest_common.sh@640 -- # local es=0 00:29:47.655 12:07:41 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:47.655 12:07:41 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:29:47.655 12:07:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:47.655 12:07:41 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:29:47.655 12:07:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:47.655 12:07:41 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:47.655 12:07:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.655 12:07:41 -- common/autotest_common.sh@10 -- # set +x 00:29:47.655 request: 00:29:47.655 { 00:29:47.655 "name": "nvme", 00:29:47.655 "trtype": "tcp", 00:29:47.655 "traddr": "10.0.0.2", 00:29:47.655 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:47.655 "adrfam": "ipv4", 00:29:47.655 "trsvcid": "8009", 00:29:47.655 "wait_for_attach": true, 00:29:47.655 "method": "bdev_nvme_start_discovery", 00:29:47.655 "req_id": 1 00:29:47.655 } 00:29:47.655 Got JSON-RPC error response 00:29:47.655 response: 00:29:47.655 { 00:29:47.655 "code": -17, 00:29:47.655 "message": "File exists" 00:29:47.655 } 00:29:47.655 12:07:41 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:29:47.655 12:07:41 -- common/autotest_common.sh@643 -- # es=1 00:29:47.655 12:07:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:47.655 12:07:41 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:47.655 12:07:41 -- 
common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:47.655 12:07:41 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:29:47.655 12:07:41 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:47.655 12:07:41 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:47.655 12:07:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.655 12:07:41 -- host/discovery.sh@67 -- # sort 00:29:47.655 12:07:41 -- common/autotest_common.sh@10 -- # set +x 00:29:47.655 12:07:41 -- host/discovery.sh@67 -- # xargs 00:29:47.655 12:07:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.655 12:07:41 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:29:47.655 12:07:41 -- host/discovery.sh@147 -- # get_bdev_list 00:29:47.655 12:07:41 -- host/discovery.sh@55 -- # sort 00:29:47.655 12:07:41 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:47.655 12:07:41 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:47.655 12:07:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.655 12:07:41 -- common/autotest_common.sh@10 -- # set +x 00:29:47.655 12:07:41 -- host/discovery.sh@55 -- # xargs 00:29:47.655 12:07:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.655 12:07:41 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:47.655 12:07:41 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:47.655 12:07:41 -- common/autotest_common.sh@640 -- # local es=0 00:29:47.655 12:07:41 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:47.655 12:07:41 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:29:47.655 12:07:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:47.655 12:07:41 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:29:47.655 12:07:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:47.655 12:07:41 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:47.655 12:07:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.655 12:07:41 -- common/autotest_common.sh@10 -- # set +x 00:29:47.655 request: 00:29:47.655 { 00:29:47.655 "name": "nvme_second", 00:29:47.655 "trtype": "tcp", 00:29:47.655 "traddr": "10.0.0.2", 00:29:47.655 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:47.655 "adrfam": "ipv4", 00:29:47.655 "trsvcid": "8009", 00:29:47.655 "wait_for_attach": true, 00:29:47.655 "method": "bdev_nvme_start_discovery", 00:29:47.655 "req_id": 1 00:29:47.655 } 00:29:47.655 Got JSON-RPC error response 00:29:47.655 response: 00:29:47.655 { 00:29:47.655 "code": -17, 00:29:47.655 "message": "File exists" 00:29:47.655 } 00:29:47.655 12:07:41 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:29:47.655 12:07:41 -- common/autotest_common.sh@643 -- # es=1 00:29:47.655 12:07:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:47.655 12:07:41 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:47.655 12:07:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:47.655 12:07:41 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:29:47.655 12:07:41 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_discovery_info 00:29:47.655 12:07:41 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:47.655 12:07:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.655 12:07:41 -- common/autotest_common.sh@10 -- # set +x 00:29:47.655 12:07:41 -- host/discovery.sh@67 -- # sort 00:29:47.655 12:07:41 -- host/discovery.sh@67 -- # xargs 00:29:47.655 12:07:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.937 12:07:41 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:29:47.937 12:07:41 -- host/discovery.sh@153 -- # get_bdev_list 00:29:47.937 12:07:41 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:47.937 12:07:41 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:47.937 12:07:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.937 12:07:41 -- host/discovery.sh@55 -- # sort 00:29:47.937 12:07:41 -- common/autotest_common.sh@10 -- # set +x 00:29:47.937 12:07:41 -- host/discovery.sh@55 -- # xargs 00:29:47.937 12:07:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.937 12:07:41 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:47.937 12:07:41 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:47.937 12:07:41 -- common/autotest_common.sh@640 -- # local es=0 00:29:47.937 12:07:41 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:47.937 12:07:41 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:29:47.937 12:07:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:47.937 12:07:41 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:29:47.937 12:07:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:47.937 12:07:41 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:47.937 12:07:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.937 12:07:41 -- common/autotest_common.sh@10 -- # set +x 00:29:48.979 [2024-06-10 12:07:42.515881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.979 [2024-06-10 12:07:42.516241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.979 [2024-06-10 12:07:42.516262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22714c0 with addr=10.0.0.2, port=8010 00:29:48.979 [2024-06-10 12:07:42.516276] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:48.979 [2024-06-10 12:07:42.516283] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:48.979 [2024-06-10 12:07:42.516290] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:49.922 [2024-06-10 12:07:43.518265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.922 [2024-06-10 12:07:43.518669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.922 [2024-06-10 12:07:43.518680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22714c0 with addr=10.0.0.2, port=8010 00:29:49.922 [2024-06-10 12:07:43.518691] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: 
*ERROR*: failed to create admin qpair 00:29:49.922 [2024-06-10 12:07:43.518697] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:49.922 [2024-06-10 12:07:43.518704] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:50.865 [2024-06-10 12:07:44.520194] bdev_nvme.c:6796:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:29:50.865 request: 00:29:50.865 { 00:29:50.865 "name": "nvme_second", 00:29:50.865 "trtype": "tcp", 00:29:50.865 "traddr": "10.0.0.2", 00:29:50.865 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:50.865 "adrfam": "ipv4", 00:29:50.865 "trsvcid": "8010", 00:29:50.865 "attach_timeout_ms": 3000, 00:29:50.865 "method": "bdev_nvme_start_discovery", 00:29:50.865 "req_id": 1 00:29:50.865 } 00:29:50.865 Got JSON-RPC error response 00:29:50.865 response: 00:29:50.865 { 00:29:50.865 "code": -110, 00:29:50.865 "message": "Connection timed out" 00:29:50.865 } 00:29:50.865 12:07:44 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:29:50.865 12:07:44 -- common/autotest_common.sh@643 -- # es=1 00:29:50.865 12:07:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:50.865 12:07:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:50.865 12:07:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:50.865 12:07:44 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:29:50.865 12:07:44 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:50.865 12:07:44 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:50.865 12:07:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:50.865 12:07:44 -- common/autotest_common.sh@10 -- # set +x 00:29:50.865 12:07:44 -- host/discovery.sh@67 -- # sort 00:29:50.865 12:07:44 -- host/discovery.sh@67 -- # xargs 00:29:50.865 12:07:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:50.865 12:07:44 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:29:50.865 12:07:44 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:29:50.865 12:07:44 -- host/discovery.sh@162 -- # kill 2124225 00:29:50.865 12:07:44 -- host/discovery.sh@163 -- # nvmftestfini 00:29:50.865 12:07:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:50.865 12:07:44 -- nvmf/common.sh@116 -- # sync 00:29:50.865 12:07:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:50.865 12:07:44 -- nvmf/common.sh@119 -- # set +e 00:29:50.865 12:07:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:50.865 12:07:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:50.865 rmmod nvme_tcp 00:29:50.865 rmmod nvme_fabrics 00:29:50.865 rmmod nvme_keyring 00:29:51.126 12:07:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:51.126 12:07:44 -- nvmf/common.sh@123 -- # set -e 00:29:51.126 12:07:44 -- nvmf/common.sh@124 -- # return 0 00:29:51.126 12:07:44 -- nvmf/common.sh@477 -- # '[' -n 2123987 ']' 00:29:51.126 12:07:44 -- nvmf/common.sh@478 -- # killprocess 2123987 00:29:51.126 12:07:44 -- common/autotest_common.sh@926 -- # '[' -z 2123987 ']' 00:29:51.126 12:07:44 -- common/autotest_common.sh@930 -- # kill -0 2123987 00:29:51.126 12:07:44 -- common/autotest_common.sh@931 -- # uname 00:29:51.126 12:07:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:51.126 12:07:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2123987 00:29:51.126 12:07:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:51.126 12:07:44 -- common/autotest_common.sh@936 -- # '[' 
reactor_1 = sudo ']' 00:29:51.126 12:07:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2123987' 00:29:51.126 killing process with pid 2123987 00:29:51.126 12:07:44 -- common/autotest_common.sh@945 -- # kill 2123987 00:29:51.126 12:07:44 -- common/autotest_common.sh@950 -- # wait 2123987 00:29:51.126 12:07:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:51.126 12:07:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:51.126 12:07:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:51.126 12:07:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:51.126 12:07:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:51.126 12:07:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.126 12:07:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:51.126 12:07:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.672 12:07:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:53.672 00:29:53.672 real 0m22.082s 00:29:53.672 user 0m28.209s 00:29:53.672 sys 0m6.490s 00:29:53.672 12:07:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:53.672 12:07:46 -- common/autotest_common.sh@10 -- # set +x 00:29:53.672 ************************************ 00:29:53.672 END TEST nvmf_discovery 00:29:53.672 ************************************ 00:29:53.672 12:07:46 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:53.672 12:07:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:53.672 12:07:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:53.672 12:07:46 -- common/autotest_common.sh@10 -- # set +x 00:29:53.672 ************************************ 00:29:53.672 START TEST nvmf_discovery_remove_ifc 00:29:53.672 ************************************ 00:29:53.672 12:07:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:53.672 * Looking for test storage... 
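The teardown traced just above (before discovery_remove_ifc begins) stops the discovery service, kills both SPDK apps and unloads the kernel initiator modules. A condensed manual equivalent, with $SPDK_DIR and the two PID variables as placeholders:

  # Stop host-side discovery, then terminate the host app and the target.
  "$SPDK_DIR/scripts/rpc.py" -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
  kill "$hostpid" "$nvmfpid"

  # Unload the kernel NVMe/TCP initiator stack, mirroring the modprobe -r lines in the log.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics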
00:29:53.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:53.672 12:07:47 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:53.672 12:07:47 -- nvmf/common.sh@7 -- # uname -s 00:29:53.672 12:07:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:53.672 12:07:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:53.672 12:07:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:53.672 12:07:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:53.672 12:07:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:53.672 12:07:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:53.672 12:07:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:53.672 12:07:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:53.672 12:07:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:53.672 12:07:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:53.672 12:07:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:53.672 12:07:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:53.672 12:07:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:53.672 12:07:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:53.672 12:07:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:53.672 12:07:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:53.672 12:07:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:53.672 12:07:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:53.672 12:07:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:53.672 12:07:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.672 12:07:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.672 12:07:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.672 12:07:47 -- paths/export.sh@5 -- # export PATH 00:29:53.672 12:07:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.672 12:07:47 -- nvmf/common.sh@46 -- # : 0 00:29:53.672 12:07:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:53.672 12:07:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:53.672 12:07:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:53.672 12:07:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:53.672 12:07:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:53.672 12:07:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:53.672 12:07:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:53.672 12:07:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:53.672 12:07:47 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:29:53.672 12:07:47 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:29:53.672 12:07:47 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:29:53.672 12:07:47 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:29:53.672 12:07:47 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:29:53.672 12:07:47 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:29:53.672 12:07:47 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:29:53.672 12:07:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:53.672 12:07:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:53.672 12:07:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:53.672 12:07:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:53.672 12:07:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:53.672 12:07:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.672 12:07:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:53.672 12:07:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.672 12:07:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:53.672 12:07:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:53.672 12:07:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:53.672 12:07:47 -- common/autotest_common.sh@10 -- # set +x 00:30:01.815 12:07:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:01.815 12:07:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:01.815 12:07:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:01.815 12:07:54 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:01.815 12:07:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:01.815 12:07:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:01.815 12:07:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:01.815 12:07:54 -- nvmf/common.sh@294 -- # net_devs=() 00:30:01.815 12:07:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:01.815 12:07:54 -- nvmf/common.sh@295 -- # e810=() 00:30:01.815 12:07:54 -- nvmf/common.sh@295 -- # local -ga e810 00:30:01.815 12:07:54 -- nvmf/common.sh@296 -- # x722=() 00:30:01.815 12:07:54 -- nvmf/common.sh@296 -- # local -ga x722 00:30:01.815 12:07:54 -- nvmf/common.sh@297 -- # mlx=() 00:30:01.815 12:07:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:01.815 12:07:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:01.815 12:07:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:01.815 12:07:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:01.815 12:07:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:01.815 12:07:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:01.815 12:07:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:01.815 12:07:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:01.815 12:07:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:01.815 12:07:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:01.815 12:07:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:01.815 12:07:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:01.815 12:07:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:01.815 12:07:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:01.815 12:07:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:01.815 12:07:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:01.815 12:07:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:01.815 12:07:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:01.815 12:07:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:01.815 12:07:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:01.815 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:01.815 12:07:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:01.815 12:07:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:01.815 12:07:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:01.815 12:07:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.815 12:07:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:01.815 12:07:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:01.815 12:07:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:01.815 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:01.815 12:07:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:01.815 12:07:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:01.815 12:07:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:01.815 12:07:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.815 12:07:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:01.815 12:07:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:01.815 12:07:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:01.815 12:07:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:01.815 12:07:54 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:01.815 12:07:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.815 12:07:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:01.815 12:07:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.815 12:07:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:01.815 Found net devices under 0000:31:00.0: cvl_0_0 00:30:01.815 12:07:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.815 12:07:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:01.815 12:07:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.815 12:07:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:01.815 12:07:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.815 12:07:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:01.815 Found net devices under 0000:31:00.1: cvl_0_1 00:30:01.815 12:07:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.815 12:07:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:01.815 12:07:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:01.815 12:07:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:01.815 12:07:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:01.815 12:07:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:01.815 12:07:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:01.815 12:07:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:01.815 12:07:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:01.815 12:07:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:01.815 12:07:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:01.815 12:07:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:01.815 12:07:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:01.815 12:07:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:01.815 12:07:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:01.815 12:07:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:01.815 12:07:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:01.815 12:07:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:01.815 12:07:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:01.815 12:07:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:01.815 12:07:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:01.815 12:07:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:01.815 12:07:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:01.815 12:07:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:01.815 12:07:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:01.815 12:07:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:01.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:01.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:30:01.815 00:30:01.815 --- 10.0.0.2 ping statistics --- 00:30:01.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.815 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:30:01.815 12:07:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:01.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:01.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.351 ms 00:30:01.815 00:30:01.815 --- 10.0.0.1 ping statistics --- 00:30:01.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.815 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:30:01.815 12:07:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:01.815 12:07:54 -- nvmf/common.sh@410 -- # return 0 00:30:01.815 12:07:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:01.815 12:07:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:01.815 12:07:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:01.815 12:07:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:01.815 12:07:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:01.815 12:07:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:01.815 12:07:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:01.815 12:07:54 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:30:01.815 12:07:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:01.815 12:07:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:01.815 12:07:54 -- common/autotest_common.sh@10 -- # set +x 00:30:01.815 12:07:54 -- nvmf/common.sh@469 -- # nvmfpid=2130832 00:30:01.815 12:07:54 -- nvmf/common.sh@470 -- # waitforlisten 2130832 00:30:01.815 12:07:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:01.815 12:07:54 -- common/autotest_common.sh@819 -- # '[' -z 2130832 ']' 00:30:01.815 12:07:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:01.815 12:07:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:01.815 12:07:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:01.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:01.815 12:07:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:01.815 12:07:54 -- common/autotest_common.sh@10 -- # set +x 00:30:01.815 [2024-06-10 12:07:54.508263] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:01.815 [2024-06-10 12:07:54.508325] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:01.815 EAL: No free 2048 kB hugepages reported on node 1 00:30:01.816 [2024-06-10 12:07:54.595945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.816 [2024-06-10 12:07:54.687184] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:01.816 [2024-06-10 12:07:54.687340] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
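For reference, the network bring-up traced above (nvmf_tcp_init in nvmf/common.sh) reduces to the shell sequence below. This is a condensed sketch of the exact commands already shown in the trace: cvl_0_0 and cvl_0_1 are the two E810 ports detected on this rig, and 10.0.0.1/10.0.0.2 are the fixed test addresses.

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1                # start from clean interfaces
  ip netns add cvl_0_0_ns_spdk                                        # target gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check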
00:30:01.816 [2024-06-10 12:07:54.687350] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:01.816 [2024-06-10 12:07:54.687358] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:01.816 [2024-06-10 12:07:54.687393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:01.816 12:07:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:01.816 12:07:55 -- common/autotest_common.sh@852 -- # return 0 00:30:01.816 12:07:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:01.816 12:07:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:01.816 12:07:55 -- common/autotest_common.sh@10 -- # set +x 00:30:01.816 12:07:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:01.816 12:07:55 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:30:01.816 12:07:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:01.816 12:07:55 -- common/autotest_common.sh@10 -- # set +x 00:30:01.816 [2024-06-10 12:07:55.343409] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:01.816 [2024-06-10 12:07:55.351607] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:01.816 null0 00:30:01.816 [2024-06-10 12:07:55.383606] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:01.816 12:07:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:01.816 12:07:55 -- host/discovery_remove_ifc.sh@59 -- # hostpid=2131179 00:30:01.816 12:07:55 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2131179 /tmp/host.sock 00:30:01.816 12:07:55 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:30:01.816 12:07:55 -- common/autotest_common.sh@819 -- # '[' -z 2131179 ']' 00:30:01.816 12:07:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:30:01.816 12:07:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:01.816 12:07:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:01.816 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:01.816 12:07:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:01.816 12:07:55 -- common/autotest_common.sh@10 -- # set +x 00:30:01.816 [2024-06-10 12:07:55.452443] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
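Two separate SPDK processes are in play from this point on, both launched by the commands traced above: the subsystem target runs inside the namespace, while a second nvmf_tgt acts as the initiator ("host") in the root namespace and is driven over its own RPC socket. In sketch form (the pids 2130832 and 2131179 are specific to this run):

  # target (pid 2130832): discovery service on 10.0.0.2:8009, data subsystem on 10.0.0.2:4420
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

  # host (pid 2131179): a plain nvmf_tgt used as the initiator, controlled via /tmp/host.sock
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &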
00:30:01.816 [2024-06-10 12:07:55.452501] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2131179 ] 00:30:01.816 EAL: No free 2048 kB hugepages reported on node 1 00:30:01.816 [2024-06-10 12:07:55.516608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:02.076 [2024-06-10 12:07:55.588729] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:02.076 [2024-06-10 12:07:55.588876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.647 12:07:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:02.647 12:07:56 -- common/autotest_common.sh@852 -- # return 0 00:30:02.647 12:07:56 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:02.647 12:07:56 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:30:02.647 12:07:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:02.647 12:07:56 -- common/autotest_common.sh@10 -- # set +x 00:30:02.647 12:07:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:02.647 12:07:56 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:30:02.647 12:07:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:02.647 12:07:56 -- common/autotest_common.sh@10 -- # set +x 00:30:02.647 12:07:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:02.647 12:07:56 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:30:02.647 12:07:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:02.647 12:07:56 -- common/autotest_common.sh@10 -- # set +x 00:30:03.591 [2024-06-10 12:07:57.322868] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:03.591 [2024-06-10 12:07:57.322891] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:03.591 [2024-06-10 12:07:57.322904] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:03.851 [2024-06-10 12:07:57.411171] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:04.112 [2024-06-10 12:07:57.638159] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:04.112 [2024-06-10 12:07:57.638202] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:04.112 [2024-06-10 12:07:57.638224] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:04.112 [2024-06-10 12:07:57.638238] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:04.112 [2024-06-10 12:07:57.638264] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:04.112 12:07:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:04.112 12:07:57 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:30:04.112 [2024-06-10 12:07:57.641396] 
bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x11a3180 was disconnected and freed. delete nvme_qpair. 00:30:04.112 12:07:57 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:04.112 12:07:57 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:04.112 12:07:57 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:04.112 12:07:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:04.112 12:07:57 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:04.112 12:07:57 -- common/autotest_common.sh@10 -- # set +x 00:30:04.112 12:07:57 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:04.112 12:07:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:04.112 12:07:57 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:30:04.112 12:07:57 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:30:04.112 12:07:57 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:30:04.112 12:07:57 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:30:04.112 12:07:57 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:04.112 12:07:57 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:04.112 12:07:57 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:04.112 12:07:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:04.112 12:07:57 -- common/autotest_common.sh@10 -- # set +x 00:30:04.112 12:07:57 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:04.112 12:07:57 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:04.112 12:07:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:04.112 12:07:57 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:04.112 12:07:57 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:05.496 12:07:58 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:05.496 12:07:58 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:05.496 12:07:58 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:05.496 12:07:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:05.496 12:07:58 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:05.496 12:07:58 -- common/autotest_common.sh@10 -- # set +x 00:30:05.496 12:07:58 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:05.496 12:07:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:05.496 12:07:58 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:05.496 12:07:58 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:06.438 12:07:59 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:06.438 12:07:59 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:06.438 12:07:59 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:06.438 12:07:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:06.438 12:07:59 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:06.438 12:07:59 -- common/autotest_common.sh@10 -- # set +x 00:30:06.438 12:07:59 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:06.438 12:07:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:06.438 12:07:59 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:06.438 12:07:59 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:07.380 12:08:00 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:07.380 12:08:01 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:07.380 12:08:01 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:07.380 12:08:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:07.380 12:08:01 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:07.380 12:08:01 -- common/autotest_common.sh@10 -- # set +x 00:30:07.380 12:08:01 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:07.380 12:08:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:07.380 12:08:01 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:07.380 12:08:01 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:08.321 12:08:02 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:08.321 12:08:02 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:08.321 12:08:02 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:08.321 12:08:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:08.321 12:08:02 -- common/autotest_common.sh@10 -- # set +x 00:30:08.321 12:08:02 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:08.321 12:08:02 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:08.321 12:08:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:08.581 12:08:02 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:08.581 12:08:02 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:09.522 [2024-06-10 12:08:03.078749] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:30:09.522 [2024-06-10 12:08:03.078793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.523 [2024-06-10 12:08:03.078805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.523 [2024-06-10 12:08:03.078814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.523 [2024-06-10 12:08:03.078822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.523 [2024-06-10 12:08:03.078830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.523 [2024-06-10 12:08:03.078837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.523 [2024-06-10 12:08:03.078844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.523 [2024-06-10 12:08:03.078851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.523 [2024-06-10 12:08:03.078859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.523 [2024-06-10 12:08:03.078866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.523 [2024-06-10 12:08:03.078873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11697a0 is same with the state(5) to be set 00:30:09.523 [2024-06-10 
12:08:03.088770] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11697a0 (9): Bad file descriptor 00:30:09.523 [2024-06-10 12:08:03.098812] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:09.523 12:08:03 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:09.523 12:08:03 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:09.523 12:08:03 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:09.523 12:08:03 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:09.523 12:08:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:09.523 12:08:03 -- common/autotest_common.sh@10 -- # set +x 00:30:09.523 12:08:03 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:10.464 [2024-06-10 12:08:04.145267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:11.405 [2024-06-10 12:08:05.169273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:11.405 [2024-06-10 12:08:05.169310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11697a0 with addr=10.0.0.2, port=4420 00:30:11.405 [2024-06-10 12:08:05.169326] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11697a0 is same with the state(5) to be set 00:30:11.405 [2024-06-10 12:08:05.169656] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11697a0 (9): Bad file descriptor 00:30:11.405 [2024-06-10 12:08:05.169677] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.405 [2024-06-10 12:08:05.169698] bdev_nvme.c:6504:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:30:11.405 [2024-06-10 12:08:05.169720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.405 [2024-06-10 12:08:05.169729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.405 [2024-06-10 12:08:05.169739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.405 [2024-06-10 12:08:05.169746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.405 [2024-06-10 12:08:05.169754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.405 [2024-06-10 12:08:05.169761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.405 [2024-06-10 12:08:05.169769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.405 [2024-06-10 12:08:05.169776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.405 [2024-06-10 12:08:05.169784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.405 [2024-06-10 12:08:05.169791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:11.405 [2024-06-10 12:08:05.169798] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:30:11.405 [2024-06-10 12:08:05.170347] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1169bb0 (9): Bad file descriptor 00:30:11.405 [2024-06-10 12:08:05.171358] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:30:11.405 [2024-06-10 12:08:05.171370] nvme_ctrlr.c:1135:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:30:11.666 12:08:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:11.666 12:08:05 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:11.666 12:08:05 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:12.609 12:08:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:12.609 12:08:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:12.609 12:08:06 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:12.609 12:08:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:12.609 12:08:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:12.609 12:08:06 -- common/autotest_common.sh@10 -- # set +x 00:30:12.609 12:08:06 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:12.609 12:08:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:12.609 12:08:06 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:30:12.609 12:08:06 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:12.609 12:08:06 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:12.609 12:08:06 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:30:12.609 12:08:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:12.609 12:08:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:12.609 12:08:06 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:12.609 12:08:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:12.609 12:08:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:12.609 12:08:06 -- common/autotest_common.sh@10 -- # set +x 00:30:12.609 12:08:06 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:12.609 12:08:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:12.869 12:08:06 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:12.869 12:08:06 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:13.438 [2024-06-10 12:08:07.187116] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:13.438 [2024-06-10 12:08:07.187137] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:13.438 [2024-06-10 12:08:07.187150] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:13.698 [2024-06-10 12:08:07.315595] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:30:13.698 [2024-06-10 12:08:07.376340] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:13.698 [2024-06-10 12:08:07.376375] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:13.698 [2024-06-10 12:08:07.376393] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: 
read 64 blocks with offset 0 00:30:13.698 [2024-06-10 12:08:07.376408] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:30:13.698 [2024-06-10 12:08:07.376416] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:13.698 [2024-06-10 12:08:07.385319] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x11acb90 was disconnected and freed. delete nvme_qpair. 00:30:13.698 12:08:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:13.698 12:08:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:13.698 12:08:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:13.698 12:08:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:13.698 12:08:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:13.698 12:08:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:13.698 12:08:07 -- common/autotest_common.sh@10 -- # set +x 00:30:13.698 12:08:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:13.698 12:08:07 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:30:13.698 12:08:07 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:30:13.698 12:08:07 -- host/discovery_remove_ifc.sh@90 -- # killprocess 2131179 00:30:13.698 12:08:07 -- common/autotest_common.sh@926 -- # '[' -z 2131179 ']' 00:30:13.698 12:08:07 -- common/autotest_common.sh@930 -- # kill -0 2131179 00:30:13.698 12:08:07 -- common/autotest_common.sh@931 -- # uname 00:30:13.698 12:08:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:13.959 12:08:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2131179 00:30:13.959 12:08:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:13.959 12:08:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:13.959 12:08:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2131179' 00:30:13.959 killing process with pid 2131179 00:30:13.959 12:08:07 -- common/autotest_common.sh@945 -- # kill 2131179 00:30:13.959 12:08:07 -- common/autotest_common.sh@950 -- # wait 2131179 00:30:13.959 12:08:07 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:30:13.959 12:08:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:13.959 12:08:07 -- nvmf/common.sh@116 -- # sync 00:30:13.959 12:08:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:13.959 12:08:07 -- nvmf/common.sh@119 -- # set +e 00:30:13.959 12:08:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:13.959 12:08:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:13.959 rmmod nvme_tcp 00:30:13.959 rmmod nvme_fabrics 00:30:13.959 rmmod nvme_keyring 00:30:13.959 12:08:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:13.959 12:08:07 -- nvmf/common.sh@123 -- # set -e 00:30:13.959 12:08:07 -- nvmf/common.sh@124 -- # return 0 00:30:13.959 12:08:07 -- nvmf/common.sh@477 -- # '[' -n 2130832 ']' 00:30:13.959 12:08:07 -- nvmf/common.sh@478 -- # killprocess 2130832 00:30:13.959 12:08:07 -- common/autotest_common.sh@926 -- # '[' -z 2130832 ']' 00:30:13.959 12:08:07 -- common/autotest_common.sh@930 -- # kill -0 2130832 00:30:13.959 12:08:07 -- common/autotest_common.sh@931 -- # uname 00:30:13.959 12:08:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:13.959 12:08:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2130832 00:30:14.219 12:08:07 -- 
common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:14.219 12:08:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:14.219 12:08:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2130832' 00:30:14.219 killing process with pid 2130832 00:30:14.219 12:08:07 -- common/autotest_common.sh@945 -- # kill 2130832 00:30:14.219 12:08:07 -- common/autotest_common.sh@950 -- # wait 2130832 00:30:14.219 12:08:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:14.219 12:08:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:14.219 12:08:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:14.219 12:08:07 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:14.219 12:08:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:14.219 12:08:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.219 12:08:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:14.219 12:08:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.762 12:08:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:16.762 00:30:16.762 real 0m23.009s 00:30:16.762 user 0m26.094s 00:30:16.762 sys 0m6.661s 00:30:16.762 12:08:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:16.762 12:08:09 -- common/autotest_common.sh@10 -- # set +x 00:30:16.762 ************************************ 00:30:16.762 END TEST nvmf_discovery_remove_ifc 00:30:16.762 ************************************ 00:30:16.762 12:08:09 -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:30:16.762 12:08:09 -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:16.762 12:08:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:16.762 12:08:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:16.762 12:08:09 -- common/autotest_common.sh@10 -- # set +x 00:30:16.762 ************************************ 00:30:16.762 START TEST nvmf_digest 00:30:16.762 ************************************ 00:30:16.762 12:08:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:16.762 * Looking for test storage... 
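Before the digest suite gets going, the discovery_remove_ifc test that just finished can be summarised, stripped of the xtrace noise, as the sequence below. rpc_cmd in the trace is a thin wrapper around scripts/rpc.py -s /tmp/host.sock, and the ip commands are the ones shown verbatim above; this is a sketch of the flow, not a replacement for the test script.

  rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  rpc.py -s /tmp/host.sock framework_start_init
  rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach
  rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'          # expect nvme0n1

  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0   # pull the target interface
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
  # poll bdev_get_bdevs once per second until the list is empty (controller-loss timeout fires)

  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # bring the interface back
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  # poll bdev_get_bdevs until the rediscovered controller shows up as nvme1n1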
00:30:16.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:16.762 12:08:10 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:16.762 12:08:10 -- nvmf/common.sh@7 -- # uname -s 00:30:16.762 12:08:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:16.762 12:08:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:16.762 12:08:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:16.762 12:08:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:16.762 12:08:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:16.762 12:08:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:16.762 12:08:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:16.762 12:08:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:16.762 12:08:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:16.762 12:08:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:16.762 12:08:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:16.762 12:08:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:16.762 12:08:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:16.762 12:08:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:16.762 12:08:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:16.762 12:08:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:16.762 12:08:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:16.762 12:08:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:16.762 12:08:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:16.763 12:08:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.763 12:08:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.763 12:08:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.763 12:08:10 -- paths/export.sh@5 -- # export PATH 00:30:16.763 12:08:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.763 12:08:10 -- nvmf/common.sh@46 -- # : 0 00:30:16.763 12:08:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:16.763 12:08:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:16.763 12:08:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:16.763 12:08:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:16.763 12:08:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:16.763 12:08:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:16.763 12:08:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:16.763 12:08:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:16.763 12:08:10 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:16.763 12:08:10 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:30:16.763 12:08:10 -- host/digest.sh@16 -- # runtime=2 00:30:16.763 12:08:10 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:30:16.763 12:08:10 -- host/digest.sh@132 -- # nvmftestinit 00:30:16.763 12:08:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:16.763 12:08:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:16.763 12:08:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:16.763 12:08:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:16.763 12:08:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:16.763 12:08:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.763 12:08:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:16.763 12:08:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.763 12:08:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:16.763 12:08:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:16.763 12:08:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:16.763 12:08:10 -- common/autotest_common.sh@10 -- # set +x 00:30:23.350 12:08:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:23.351 12:08:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:23.351 12:08:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:23.351 12:08:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:23.351 12:08:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:23.351 12:08:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:23.351 12:08:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:23.351 12:08:16 -- 
nvmf/common.sh@294 -- # net_devs=() 00:30:23.351 12:08:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:23.351 12:08:16 -- nvmf/common.sh@295 -- # e810=() 00:30:23.351 12:08:16 -- nvmf/common.sh@295 -- # local -ga e810 00:30:23.351 12:08:16 -- nvmf/common.sh@296 -- # x722=() 00:30:23.351 12:08:16 -- nvmf/common.sh@296 -- # local -ga x722 00:30:23.351 12:08:16 -- nvmf/common.sh@297 -- # mlx=() 00:30:23.351 12:08:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:23.351 12:08:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:23.351 12:08:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:23.351 12:08:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:23.351 12:08:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:23.351 12:08:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:23.351 12:08:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:23.351 12:08:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:23.351 12:08:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:23.351 12:08:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:23.351 12:08:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:23.351 12:08:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:23.351 12:08:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:23.351 12:08:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:23.351 12:08:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:23.351 12:08:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:23.351 12:08:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:23.351 12:08:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:23.351 12:08:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:23.351 12:08:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:23.351 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:23.351 12:08:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:23.351 12:08:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:23.351 12:08:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.351 12:08:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.351 12:08:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:23.351 12:08:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:23.351 12:08:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:23.351 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:23.351 12:08:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:23.351 12:08:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:23.351 12:08:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.351 12:08:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.351 12:08:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:23.351 12:08:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:23.351 12:08:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:23.351 12:08:16 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:23.351 12:08:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:23.351 12:08:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.351 12:08:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:23.351 12:08:16 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.351 12:08:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:23.351 Found net devices under 0000:31:00.0: cvl_0_0 00:30:23.351 12:08:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.351 12:08:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:23.351 12:08:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.351 12:08:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:23.351 12:08:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.351 12:08:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:23.351 Found net devices under 0000:31:00.1: cvl_0_1 00:30:23.351 12:08:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.351 12:08:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:23.351 12:08:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:23.351 12:08:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:23.351 12:08:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:23.351 12:08:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:23.351 12:08:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:23.351 12:08:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:23.351 12:08:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:23.351 12:08:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:23.351 12:08:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:23.351 12:08:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:23.351 12:08:16 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:23.351 12:08:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:23.351 12:08:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:23.351 12:08:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:23.351 12:08:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:23.351 12:08:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:23.351 12:08:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:23.351 12:08:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:23.351 12:08:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:23.351 12:08:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:23.351 12:08:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:23.351 12:08:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:23.351 12:08:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:23.351 12:08:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:23.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:23.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:30:23.351 00:30:23.351 --- 10.0.0.2 ping statistics --- 00:30:23.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.351 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:30:23.351 12:08:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:23.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:23.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:30:23.351 00:30:23.351 --- 10.0.0.1 ping statistics --- 00:30:23.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.351 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:30:23.351 12:08:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:23.351 12:08:16 -- nvmf/common.sh@410 -- # return 0 00:30:23.351 12:08:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:23.351 12:08:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:23.351 12:08:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:23.351 12:08:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:23.351 12:08:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:23.351 12:08:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:23.351 12:08:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:23.351 12:08:17 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:23.351 12:08:17 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:30:23.351 12:08:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:23.351 12:08:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:23.351 12:08:17 -- common/autotest_common.sh@10 -- # set +x 00:30:23.351 ************************************ 00:30:23.351 START TEST nvmf_digest_clean 00:30:23.351 ************************************ 00:30:23.351 12:08:17 -- common/autotest_common.sh@1104 -- # run_digest 00:30:23.351 12:08:17 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:30:23.351 12:08:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:23.351 12:08:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:23.351 12:08:17 -- common/autotest_common.sh@10 -- # set +x 00:30:23.351 12:08:17 -- nvmf/common.sh@469 -- # nvmfpid=2137692 00:30:23.351 12:08:17 -- nvmf/common.sh@470 -- # waitforlisten 2137692 00:30:23.351 12:08:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:23.351 12:08:17 -- common/autotest_common.sh@819 -- # '[' -z 2137692 ']' 00:30:23.351 12:08:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:23.351 12:08:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:23.351 12:08:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:23.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:23.351 12:08:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:23.351 12:08:17 -- common/autotest_common.sh@10 -- # set +x 00:30:23.351 [2024-06-10 12:08:17.074891] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:30:23.351 [2024-06-10 12:08:17.074938] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:23.351 EAL: No free 2048 kB hugepages reported on node 1 00:30:23.613 [2024-06-10 12:08:17.141017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.613 [2024-06-10 12:08:17.203301] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:23.613 [2024-06-10 12:08:17.203421] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:23.613 [2024-06-10 12:08:17.203429] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:23.613 [2024-06-10 12:08:17.203436] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:23.613 [2024-06-10 12:08:17.203460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.185 12:08:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:24.185 12:08:17 -- common/autotest_common.sh@852 -- # return 0 00:30:24.185 12:08:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:24.185 12:08:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:24.185 12:08:17 -- common/autotest_common.sh@10 -- # set +x 00:30:24.186 12:08:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:24.186 12:08:17 -- host/digest.sh@120 -- # common_target_config 00:30:24.186 12:08:17 -- host/digest.sh@43 -- # rpc_cmd 00:30:24.186 12:08:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:24.186 12:08:17 -- common/autotest_common.sh@10 -- # set +x 00:30:24.186 null0 00:30:24.186 [2024-06-10 12:08:17.954285] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:24.448 [2024-06-10 12:08:17.978484] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:24.448 12:08:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:24.448 12:08:17 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:30:24.448 12:08:17 -- host/digest.sh@77 -- # local rw bs qd 00:30:24.448 12:08:17 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:24.448 12:08:17 -- host/digest.sh@80 -- # rw=randread 00:30:24.448 12:08:17 -- host/digest.sh@80 -- # bs=4096 00:30:24.448 12:08:17 -- host/digest.sh@80 -- # qd=128 00:30:24.448 12:08:17 -- host/digest.sh@82 -- # bperfpid=2137829 00:30:24.448 12:08:17 -- host/digest.sh@83 -- # waitforlisten 2137829 /var/tmp/bperf.sock 00:30:24.448 12:08:17 -- common/autotest_common.sh@819 -- # '[' -z 2137829 ']' 00:30:24.448 12:08:17 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:24.448 12:08:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:24.448 12:08:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:24.448 12:08:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:24.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:30:24.448 12:08:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:24.448 12:08:17 -- common/autotest_common.sh@10 -- # set +x 00:30:24.448 [2024-06-10 12:08:18.027372] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:24.448 [2024-06-10 12:08:18.027417] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2137829 ] 00:30:24.448 EAL: No free 2048 kB hugepages reported on node 1 00:30:24.448 [2024-06-10 12:08:18.103439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.448 [2024-06-10 12:08:18.165771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:25.020 12:08:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:25.020 12:08:18 -- common/autotest_common.sh@852 -- # return 0 00:30:25.020 12:08:18 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:30:25.020 12:08:18 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:30:25.020 12:08:18 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:25.281 12:08:18 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:25.281 12:08:18 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:25.543 nvme0n1 00:30:25.804 12:08:19 -- host/digest.sh@91 -- # bperf_py perform_tests 00:30:25.804 12:08:19 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:25.804 Running I/O for 2 seconds... 
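The digest run just started follows the run_bperf pattern traced above: bdevperf is launched idle on its own RPC socket, the NVMe/TCP controller is attached with data digest (--ddgst) enabled, and the 2-second randread job is then kicked off over the same socket. Condensed sketch (binary and script names shortened; the full paths are as traced above):

  bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  rpc.py -s /var/tmp/bperf.sock framework_start_init
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  bdevperf.py -s /var/tmp/bperf.sock perform_tests          # drives I/O against nvme0n1; results follow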
00:30:27.718 00:30:27.719 Latency(us) 00:30:27.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:27.719 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:27.719 nvme0n1 : 2.00 22224.14 86.81 0.00 0.00 5752.51 2525.87 15947.09 00:30:27.719 =================================================================================================================== 00:30:27.719 Total : 22224.14 86.81 0.00 0.00 5752.51 2525.87 15947.09 00:30:27.719 0 00:30:27.719 12:08:21 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:30:27.719 12:08:21 -- host/digest.sh@92 -- # get_accel_stats 00:30:27.719 12:08:21 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:27.719 12:08:21 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:27.719 | select(.opcode=="crc32c") 00:30:27.719 | "\(.module_name) \(.executed)"' 00:30:27.719 12:08:21 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:28.003 12:08:21 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:30:28.003 12:08:21 -- host/digest.sh@93 -- # exp_module=software 00:30:28.003 12:08:21 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:30:28.003 12:08:21 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:28.003 12:08:21 -- host/digest.sh@97 -- # killprocess 2137829 00:30:28.003 12:08:21 -- common/autotest_common.sh@926 -- # '[' -z 2137829 ']' 00:30:28.003 12:08:21 -- common/autotest_common.sh@930 -- # kill -0 2137829 00:30:28.003 12:08:21 -- common/autotest_common.sh@931 -- # uname 00:30:28.003 12:08:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:28.003 12:08:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2137829 00:30:28.003 12:08:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:28.003 12:08:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:28.003 12:08:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2137829' 00:30:28.003 killing process with pid 2137829 00:30:28.003 12:08:21 -- common/autotest_common.sh@945 -- # kill 2137829 00:30:28.003 Received shutdown signal, test time was about 2.000000 seconds 00:30:28.003 00:30:28.003 Latency(us) 00:30:28.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:28.003 =================================================================================================================== 00:30:28.003 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:28.003 12:08:21 -- common/autotest_common.sh@950 -- # wait 2137829 00:30:28.295 12:08:21 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:30:28.295 12:08:21 -- host/digest.sh@77 -- # local rw bs qd 00:30:28.295 12:08:21 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:28.295 12:08:21 -- host/digest.sh@80 -- # rw=randread 00:30:28.295 12:08:21 -- host/digest.sh@80 -- # bs=131072 00:30:28.295 12:08:21 -- host/digest.sh@80 -- # qd=16 00:30:28.295 12:08:21 -- host/digest.sh@82 -- # bperfpid=2138643 00:30:28.295 12:08:21 -- host/digest.sh@83 -- # waitforlisten 2138643 /var/tmp/bperf.sock 00:30:28.295 12:08:21 -- common/autotest_common.sh@819 -- # '[' -z 2138643 ']' 00:30:28.295 12:08:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:28.295 12:08:21 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 
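
After each 2-second run the script verifies that the CRC32C digest work actually executed, and in the expected module. It queries accel_get_stats over the bperf socket, keeps only the crc32c operation with the jq filter shown above, then checks that the executed counter is non-zero and that the reporting module is software (no hardware accel module is configured in this run). The same check, done by hand against the socket used here:

  stats=$(./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
          | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  read -r acc_module acc_executed <<< "$stats"
  # Pass criteria used by digest.sh: work was executed, and by the expected module.
  (( acc_executed > 0 )) && [[ $acc_module == software ]] && echo "crc32c verified in software"
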
00:30:28.295 12:08:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:28.295 12:08:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:28.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:28.295 12:08:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:28.295 12:08:21 -- common/autotest_common.sh@10 -- # set +x 00:30:28.295 [2024-06-10 12:08:21.837890] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:28.295 [2024-06-10 12:08:21.837956] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2138643 ] 00:30:28.295 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:28.295 Zero copy mechanism will not be used. 00:30:28.295 EAL: No free 2048 kB hugepages reported on node 1 00:30:28.295 [2024-06-10 12:08:21.914566] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.295 [2024-06-10 12:08:21.976826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:28.867 12:08:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:28.867 12:08:22 -- common/autotest_common.sh@852 -- # return 0 00:30:28.867 12:08:22 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:30:28.867 12:08:22 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:30:28.867 12:08:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:29.128 12:08:22 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:29.128 12:08:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:29.389 nvme0n1 00:30:29.389 12:08:23 -- host/digest.sh@91 -- # bperf_py perform_tests 00:30:29.389 12:08:23 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:29.650 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:29.650 Zero copy mechanism will not be used. 00:30:29.650 Running I/O for 2 seconds... 
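
The zero-copy notice in this pass is informational: with -o 131072 the I/O size is above bdevperf's 64 KiB zero-copy threshold, so it falls back to its regular buffered I/O path, and the digest checks and pass/fail result are unaffected. A hypothetical manual rerun that stays at the threshold, with everything else as in this run, should not print the notice:

  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
          -w randread -o 65536 -t 2 -q 16 -z --wait-for-rpc
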
00:30:31.562 00:30:31.562 Latency(us) 00:30:31.562 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:31.562 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:31.562 nvme0n1 : 2.04 2682.61 335.33 0.00 0.00 5848.12 2621.44 45001.39 00:30:31.562 =================================================================================================================== 00:30:31.562 Total : 2682.61 335.33 0.00 0.00 5848.12 2621.44 45001.39 00:30:31.562 0 00:30:31.562 12:08:25 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:30:31.562 12:08:25 -- host/digest.sh@92 -- # get_accel_stats 00:30:31.562 12:08:25 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:31.562 12:08:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:31.562 12:08:25 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:31.562 | select(.opcode=="crc32c") 00:30:31.562 | "\(.module_name) \(.executed)"' 00:30:31.824 12:08:25 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:30:31.824 12:08:25 -- host/digest.sh@93 -- # exp_module=software 00:30:31.824 12:08:25 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:30:31.824 12:08:25 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:31.824 12:08:25 -- host/digest.sh@97 -- # killprocess 2138643 00:30:31.824 12:08:25 -- common/autotest_common.sh@926 -- # '[' -z 2138643 ']' 00:30:31.824 12:08:25 -- common/autotest_common.sh@930 -- # kill -0 2138643 00:30:31.824 12:08:25 -- common/autotest_common.sh@931 -- # uname 00:30:31.824 12:08:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:31.824 12:08:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2138643 00:30:31.824 12:08:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:31.824 12:08:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:31.824 12:08:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2138643' 00:30:31.824 killing process with pid 2138643 00:30:31.824 12:08:25 -- common/autotest_common.sh@945 -- # kill 2138643 00:30:31.824 Received shutdown signal, test time was about 2.000000 seconds 00:30:31.824 00:30:31.824 Latency(us) 00:30:31.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:31.824 =================================================================================================================== 00:30:31.824 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:31.824 12:08:25 -- common/autotest_common.sh@950 -- # wait 2138643 00:30:32.085 12:08:25 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:30:32.085 12:08:25 -- host/digest.sh@77 -- # local rw bs qd 00:30:32.085 12:08:25 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:32.085 12:08:25 -- host/digest.sh@80 -- # rw=randwrite 00:30:32.085 12:08:25 -- host/digest.sh@80 -- # bs=4096 00:30:32.085 12:08:25 -- host/digest.sh@80 -- # qd=128 00:30:32.085 12:08:25 -- host/digest.sh@82 -- # bperfpid=2139437 00:30:32.085 12:08:25 -- host/digest.sh@83 -- # waitforlisten 2139437 /var/tmp/bperf.sock 00:30:32.085 12:08:25 -- common/autotest_common.sh@819 -- # '[' -z 2139437 ']' 00:30:32.085 12:08:25 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:32.085 12:08:25 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:30:32.085 12:08:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:32.085 12:08:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:32.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:32.085 12:08:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:32.085 12:08:25 -- common/autotest_common.sh@10 -- # set +x 00:30:32.085 [2024-06-10 12:08:25.664464] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:32.085 [2024-06-10 12:08:25.664522] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2139437 ] 00:30:32.085 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.085 [2024-06-10 12:08:25.740156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.085 [2024-06-10 12:08:25.791542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:32.655 12:08:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:32.655 12:08:26 -- common/autotest_common.sh@852 -- # return 0 00:30:32.655 12:08:26 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:30:32.655 12:08:26 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:30:32.655 12:08:26 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:32.915 12:08:26 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:32.915 12:08:26 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:33.175 nvme0n1 00:30:33.175 12:08:26 -- host/digest.sh@91 -- # bperf_py perform_tests 00:30:33.175 12:08:26 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:33.175 Running I/O for 2 seconds... 
00:30:35.716 00:30:35.717 Latency(us) 00:30:35.717 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:35.717 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:35.717 nvme0n1 : 2.00 22569.74 88.16 0.00 0.00 5666.13 2785.28 15837.87 00:30:35.717 =================================================================================================================== 00:30:35.717 Total : 22569.74 88.16 0.00 0.00 5666.13 2785.28 15837.87 00:30:35.717 0 00:30:35.717 12:08:28 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:30:35.717 12:08:28 -- host/digest.sh@92 -- # get_accel_stats 00:30:35.717 12:08:28 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:35.717 12:08:28 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:35.717 | select(.opcode=="crc32c") 00:30:35.717 | "\(.module_name) \(.executed)"' 00:30:35.717 12:08:28 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:35.717 12:08:29 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:30:35.717 12:08:29 -- host/digest.sh@93 -- # exp_module=software 00:30:35.717 12:08:29 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:30:35.717 12:08:29 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:35.717 12:08:29 -- host/digest.sh@97 -- # killprocess 2139437 00:30:35.717 12:08:29 -- common/autotest_common.sh@926 -- # '[' -z 2139437 ']' 00:30:35.717 12:08:29 -- common/autotest_common.sh@930 -- # kill -0 2139437 00:30:35.717 12:08:29 -- common/autotest_common.sh@931 -- # uname 00:30:35.717 12:08:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:35.717 12:08:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2139437 00:30:35.717 12:08:29 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:35.717 12:08:29 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:35.717 12:08:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2139437' 00:30:35.717 killing process with pid 2139437 00:30:35.717 12:08:29 -- common/autotest_common.sh@945 -- # kill 2139437 00:30:35.717 Received shutdown signal, test time was about 2.000000 seconds 00:30:35.717 00:30:35.717 Latency(us) 00:30:35.717 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:35.717 =================================================================================================================== 00:30:35.717 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:35.717 12:08:29 -- common/autotest_common.sh@950 -- # wait 2139437 00:30:35.717 12:08:29 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:30:35.717 12:08:29 -- host/digest.sh@77 -- # local rw bs qd 00:30:35.717 12:08:29 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:35.717 12:08:29 -- host/digest.sh@80 -- # rw=randwrite 00:30:35.717 12:08:29 -- host/digest.sh@80 -- # bs=131072 00:30:35.717 12:08:29 -- host/digest.sh@80 -- # qd=16 00:30:35.717 12:08:29 -- host/digest.sh@82 -- # bperfpid=2140123 00:30:35.717 12:08:29 -- host/digest.sh@83 -- # waitforlisten 2140123 /var/tmp/bperf.sock 00:30:35.717 12:08:29 -- common/autotest_common.sh@819 -- # '[' -z 2140123 ']' 00:30:35.717 12:08:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:35.717 12:08:29 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 
--wait-for-rpc 00:30:35.717 12:08:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:35.717 12:08:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:35.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:35.717 12:08:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:35.717 12:08:29 -- common/autotest_common.sh@10 -- # set +x 00:30:35.717 [2024-06-10 12:08:29.320765] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:35.717 [2024-06-10 12:08:29.320821] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2140123 ] 00:30:35.717 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:35.717 Zero copy mechanism will not be used. 00:30:35.717 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.717 [2024-06-10 12:08:29.396107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.717 [2024-06-10 12:08:29.447798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:36.659 12:08:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:36.659 12:08:30 -- common/autotest_common.sh@852 -- # return 0 00:30:36.659 12:08:30 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:30:36.659 12:08:30 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:30:36.659 12:08:30 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:36.659 12:08:30 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:36.659 12:08:30 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:36.920 nvme0n1 00:30:36.920 12:08:30 -- host/digest.sh@91 -- # bperf_py perform_tests 00:30:36.920 12:08:30 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:37.181 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:37.181 Zero copy mechanism will not be used. 00:30:37.181 Running I/O for 2 seconds... 
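
A quick cross-check of the result tables above: the MiB/s column is simply IOPS multiplied by the I/O size and divided by 2^20. For the two randread passes, 22224.14 IOPS at 4096 bytes and 2682.61 IOPS at 131072 bytes work out to the reported 86.81 and 335.33 MiB/s:

  awk 'BEGIN { printf "%.2f %.2f\n", 22224.14*4096/1048576, 2682.61*131072/1048576 }'
  # prints: 86.81 335.33
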
00:30:39.093 00:30:39.093 Latency(us) 00:30:39.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:39.093 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:39.093 nvme0n1 : 2.00 4681.49 585.19 0.00 0.00 3411.48 1631.57 13981.01 00:30:39.093 =================================================================================================================== 00:30:39.093 Total : 4681.49 585.19 0.00 0.00 3411.48 1631.57 13981.01 00:30:39.093 0 00:30:39.093 12:08:32 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:30:39.093 12:08:32 -- host/digest.sh@92 -- # get_accel_stats 00:30:39.093 12:08:32 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:39.093 12:08:32 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:39.093 | select(.opcode=="crc32c") 00:30:39.093 | "\(.module_name) \(.executed)"' 00:30:39.093 12:08:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:39.352 12:08:32 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:30:39.352 12:08:32 -- host/digest.sh@93 -- # exp_module=software 00:30:39.352 12:08:32 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:30:39.352 12:08:32 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:39.352 12:08:32 -- host/digest.sh@97 -- # killprocess 2140123 00:30:39.352 12:08:32 -- common/autotest_common.sh@926 -- # '[' -z 2140123 ']' 00:30:39.352 12:08:32 -- common/autotest_common.sh@930 -- # kill -0 2140123 00:30:39.352 12:08:32 -- common/autotest_common.sh@931 -- # uname 00:30:39.352 12:08:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:39.352 12:08:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2140123 00:30:39.352 12:08:32 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:39.352 12:08:32 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:39.352 12:08:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2140123' 00:30:39.352 killing process with pid 2140123 00:30:39.352 12:08:32 -- common/autotest_common.sh@945 -- # kill 2140123 00:30:39.352 Received shutdown signal, test time was about 2.000000 seconds 00:30:39.352 00:30:39.352 Latency(us) 00:30:39.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:39.352 =================================================================================================================== 00:30:39.352 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:39.352 12:08:32 -- common/autotest_common.sh@950 -- # wait 2140123 00:30:39.352 12:08:33 -- host/digest.sh@126 -- # killprocess 2137692 00:30:39.352 12:08:33 -- common/autotest_common.sh@926 -- # '[' -z 2137692 ']' 00:30:39.352 12:08:33 -- common/autotest_common.sh@930 -- # kill -0 2137692 00:30:39.352 12:08:33 -- common/autotest_common.sh@931 -- # uname 00:30:39.352 12:08:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:39.352 12:08:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2137692 00:30:39.613 12:08:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:39.613 12:08:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:39.613 12:08:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2137692' 00:30:39.613 killing process with pid 2137692 00:30:39.613 12:08:33 -- common/autotest_common.sh@945 -- # kill 2137692 00:30:39.613 12:08:33 -- common/autotest_common.sh@950 -- # wait 2137692 
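
Both digest-clean processes are torn down through the killprocess helper, which refuses to signal anything that does not look like the process it started: it probes the pid with kill -0, reads the command name with ps --no-headers -o comm= (reactor_1 for the bdevperf instances, reactor_0 for the nvmf target), and only then kills and reaps it. A simplified sketch of that guard; the real helper in autotest_common.sh also has extra handling for sudo-wrapped processes and non-Linux hosts:

  killprocess_sketch() {
          local pid=$1
          kill -0 "$pid" || return 1                 # still running?
          local name
          name=$(ps --no-headers -o comm= "$pid")    # SPDK apps report reactor_<core>
          [ "$name" = sudo ] && return 1             # never signal the sudo wrapper directly
          kill "$pid" && wait "$pid"                 # stop it and collect the exit status
  }
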
00:30:39.613 00:30:39.613 real 0m16.239s 00:30:39.613 user 0m31.538s 00:30:39.613 sys 0m3.415s 00:30:39.613 12:08:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:39.613 12:08:33 -- common/autotest_common.sh@10 -- # set +x 00:30:39.613 ************************************ 00:30:39.613 END TEST nvmf_digest_clean 00:30:39.613 ************************************ 00:30:39.613 12:08:33 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:30:39.613 12:08:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:39.613 12:08:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:39.613 12:08:33 -- common/autotest_common.sh@10 -- # set +x 00:30:39.613 ************************************ 00:30:39.613 START TEST nvmf_digest_error 00:30:39.613 ************************************ 00:30:39.613 12:08:33 -- common/autotest_common.sh@1104 -- # run_digest_error 00:30:39.613 12:08:33 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:30:39.613 12:08:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:39.613 12:08:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:39.613 12:08:33 -- common/autotest_common.sh@10 -- # set +x 00:30:39.613 12:08:33 -- nvmf/common.sh@469 -- # nvmfpid=2140841 00:30:39.613 12:08:33 -- nvmf/common.sh@470 -- # waitforlisten 2140841 00:30:39.613 12:08:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:39.613 12:08:33 -- common/autotest_common.sh@819 -- # '[' -z 2140841 ']' 00:30:39.613 12:08:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:39.613 12:08:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:39.613 12:08:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:39.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:39.613 12:08:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:39.613 12:08:33 -- common/autotest_common.sh@10 -- # set +x 00:30:39.613 [2024-06-10 12:08:33.363807] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:39.613 [2024-06-10 12:08:33.363860] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:39.874 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.874 [2024-06-10 12:08:33.428944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.874 [2024-06-10 12:08:33.490486] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:39.874 [2024-06-10 12:08:33.490608] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:39.874 [2024-06-10 12:08:33.490617] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:39.874 [2024-06-10 12:08:33.490624] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
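
The nvmf_digest_error target above is started with -e 0xFFFF, so every tracepoint group is enabled; the trace_register_description error about RDMA_REQ_RDY_TO_COMPL_PEND only means that one tracepoint description name exceeds the allowed length and is dropped, it does not fail the test. The two notices also spell out how to get at the trace data; a hedged example of both routes, assuming the spdk_trace tool built in this workspace accepts a copied shared-memory buffer via -f:

  # Live snapshot from the still-running target (values from the notice above).
  ./build/bin/spdk_trace -s nvmf -i 0
  # Offline analysis of a copied trace buffer.
  cp /dev/shm/nvmf_trace.0 /tmp/ && ./build/bin/spdk_trace -f /tmp/nvmf_trace.0
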
00:30:39.874 [2024-06-10 12:08:33.490643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:40.445 12:08:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:40.445 12:08:34 -- common/autotest_common.sh@852 -- # return 0 00:30:40.445 12:08:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:40.445 12:08:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:40.445 12:08:34 -- common/autotest_common.sh@10 -- # set +x 00:30:40.445 12:08:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:40.445 12:08:34 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:30:40.445 12:08:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:40.445 12:08:34 -- common/autotest_common.sh@10 -- # set +x 00:30:40.445 [2024-06-10 12:08:34.152539] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:30:40.445 12:08:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:40.445 12:08:34 -- host/digest.sh@104 -- # common_target_config 00:30:40.445 12:08:34 -- host/digest.sh@43 -- # rpc_cmd 00:30:40.445 12:08:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:40.445 12:08:34 -- common/autotest_common.sh@10 -- # set +x 00:30:40.706 null0 00:30:40.706 [2024-06-10 12:08:34.233402] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.706 [2024-06-10 12:08:34.257593] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.706 12:08:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:40.706 12:08:34 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:30:40.706 12:08:34 -- host/digest.sh@54 -- # local rw bs qd 00:30:40.706 12:08:34 -- host/digest.sh@56 -- # rw=randread 00:30:40.706 12:08:34 -- host/digest.sh@56 -- # bs=4096 00:30:40.706 12:08:34 -- host/digest.sh@56 -- # qd=128 00:30:40.706 12:08:34 -- host/digest.sh@58 -- # bperfpid=2141192 00:30:40.706 12:08:34 -- host/digest.sh@60 -- # waitforlisten 2141192 /var/tmp/bperf.sock 00:30:40.706 12:08:34 -- common/autotest_common.sh@819 -- # '[' -z 2141192 ']' 00:30:40.706 12:08:34 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:30:40.706 12:08:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:40.706 12:08:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:40.706 12:08:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:40.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:40.706 12:08:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:40.706 12:08:34 -- common/autotest_common.sh@10 -- # set +x 00:30:40.706 [2024-06-10 12:08:34.306415] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:30:40.706 [2024-06-10 12:08:34.306467] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2141192 ] 00:30:40.706 EAL: No free 2048 kB hugepages reported on node 1 00:30:40.706 [2024-06-10 12:08:34.358886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.706 [2024-06-10 12:08:34.410872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:41.648 12:08:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:41.648 12:08:35 -- common/autotest_common.sh@852 -- # return 0 00:30:41.648 12:08:35 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:41.648 12:08:35 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:41.648 12:08:35 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:41.648 12:08:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:41.648 12:08:35 -- common/autotest_common.sh@10 -- # set +x 00:30:41.648 12:08:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:41.648 12:08:35 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:41.648 12:08:35 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:41.909 nvme0n1 00:30:41.909 12:08:35 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:41.909 12:08:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:41.909 12:08:35 -- common/autotest_common.sh@10 -- # set +x 00:30:41.909 12:08:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:41.909 12:08:35 -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:41.909 12:08:35 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:41.909 Running I/O for 2 seconds... 
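
The flood of data digest errors that follows is the intended outcome of the setup above. On the target, crc32c is routed to the error-injection accel module (accel_assign_opc -o crc32c -m error) and injection starts out disabled so the controller attach with --ddgst succeeds; the initiator is told to keep NVMe error statistics and to retry failed bdev I/O (--bdev-retry-count -1). Once accel_error_inject_error -o crc32c -t corrupt -i 256 arms corruption on the target, the digests it produces no longer match the data, the initiator's nvme_tcp layer flags a data digest error on each affected read, and the commands complete with the COMMAND TRANSIENT TRANSPORT ERROR status seen below. The sequence written out as the underlying rpc.py calls; rpc_cmd goes to the nvmf target's default RPC socket, while the -s /var/tmp/bperf.sock calls go to the bdevperf initiator (arguments copied from the log):

  ./scripts/rpc.py accel_assign_opc -o crc32c -m error                   # target: route crc32c to the error module
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable         # target: injection off while attaching
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
          --nvme-error-stat --bdev-retry-count -1                        # initiator: count errors, keep retrying
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
          --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
          -n nqn.2016-06.io.spdk:cnode1 -b nvme0                         # initiator: attach with data digest enabled
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256  # target: arm crc32c corruption
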
00:30:41.909 [2024-06-10 12:08:35.596042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:41.909 [2024-06-10 12:08:35.596073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.909 [2024-06-10 12:08:35.596081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.909 [2024-06-10 12:08:35.606201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:41.909 [2024-06-10 12:08:35.606221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.909 [2024-06-10 12:08:35.606230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.909 [2024-06-10 12:08:35.619607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:41.909 [2024-06-10 12:08:35.619627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.909 [2024-06-10 12:08:35.619633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.909 [2024-06-10 12:08:35.631398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:41.909 [2024-06-10 12:08:35.631416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.909 [2024-06-10 12:08:35.631423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.909 [2024-06-10 12:08:35.643045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:41.909 [2024-06-10 12:08:35.643062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.909 [2024-06-10 12:08:35.643068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.909 [2024-06-10 12:08:35.654077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:41.909 [2024-06-10 12:08:35.654094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.909 [2024-06-10 12:08:35.654101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.909 [2024-06-10 12:08:35.665855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:41.909 [2024-06-10 12:08:35.665872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.909 [2024-06-10 12:08:35.665879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.909 [2024-06-10 12:08:35.676857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:41.909 [2024-06-10 12:08:35.676874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.909 [2024-06-10 12:08:35.676880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.171 [2024-06-10 12:08:35.687865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.171 [2024-06-10 12:08:35.687882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.171 [2024-06-10 12:08:35.687889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.171 [2024-06-10 12:08:35.699033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.171 [2024-06-10 12:08:35.699050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.171 [2024-06-10 12:08:35.699060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.171 [2024-06-10 12:08:35.710953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.171 [2024-06-10 12:08:35.710970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.171 [2024-06-10 12:08:35.710977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.171 [2024-06-10 12:08:35.721971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.171 [2024-06-10 12:08:35.721988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.171 [2024-06-10 12:08:35.721995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.171 [2024-06-10 12:08:35.732991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.171 [2024-06-10 12:08:35.733007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.171 [2024-06-10 12:08:35.733014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.171 [2024-06-10 12:08:35.744967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.171 [2024-06-10 12:08:35.744983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.171 [2024-06-10 12:08:35.744990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.171 [2024-06-10 12:08:35.755994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.171 [2024-06-10 12:08:35.756011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.171 [2024-06-10 12:08:35.756017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.171 [2024-06-10 12:08:35.767826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.171 [2024-06-10 12:08:35.767843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.171 [2024-06-10 12:08:35.767849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.171 [2024-06-10 12:08:35.778688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.171 [2024-06-10 12:08:35.778704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.171 [2024-06-10 12:08:35.778711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.171 [2024-06-10 12:08:35.789639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.171 [2024-06-10 12:08:35.789655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.171 [2024-06-10 12:08:35.789662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.171 [2024-06-10 12:08:35.801621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.171 [2024-06-10 12:08:35.801641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.171 [2024-06-10 12:08:35.801647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.171 [2024-06-10 12:08:35.812711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.171 [2024-06-10 12:08:35.812728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.172 [2024-06-10 12:08:35.812735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.172 [2024-06-10 12:08:35.824066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.172 [2024-06-10 12:08:35.824082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.172 [2024-06-10 12:08:35.824089] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.172 [2024-06-10 12:08:35.835093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.172 [2024-06-10 12:08:35.835110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.172 [2024-06-10 12:08:35.835116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.172 [2024-06-10 12:08:35.846939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.172 [2024-06-10 12:08:35.846954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.172 [2024-06-10 12:08:35.846960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.172 [2024-06-10 12:08:35.857493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.172 [2024-06-10 12:08:35.857510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.172 [2024-06-10 12:08:35.857516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.172 [2024-06-10 12:08:35.869866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.172 [2024-06-10 12:08:35.869881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.172 [2024-06-10 12:08:35.869887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.172 [2024-06-10 12:08:35.881261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.172 [2024-06-10 12:08:35.881277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.172 [2024-06-10 12:08:35.881283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.172 [2024-06-10 12:08:35.892271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.172 [2024-06-10 12:08:35.892288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.172 [2024-06-10 12:08:35.892295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.172 [2024-06-10 12:08:35.903439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.172 [2024-06-10 12:08:35.903455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.172 
[2024-06-10 12:08:35.903461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.172 [2024-06-10 12:08:35.915336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.172 [2024-06-10 12:08:35.915353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.172 [2024-06-10 12:08:35.915359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.172 [2024-06-10 12:08:35.926864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.172 [2024-06-10 12:08:35.926879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.172 [2024-06-10 12:08:35.926886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.172 [2024-06-10 12:08:35.940716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.172 [2024-06-10 12:08:35.940733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.172 [2024-06-10 12:08:35.940739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.433 [2024-06-10 12:08:35.950446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.433 [2024-06-10 12:08:35.950463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.433 [2024-06-10 12:08:35.950469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.433 [2024-06-10 12:08:35.962057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.433 [2024-06-10 12:08:35.962073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.434 [2024-06-10 12:08:35.962080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.434 [2024-06-10 12:08:35.974214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.434 [2024-06-10 12:08:35.974231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.434 [2024-06-10 12:08:35.974237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.434 [2024-06-10 12:08:35.984875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.434 [2024-06-10 12:08:35.984891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6273 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.434 [2024-06-10 12:08:35.984897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.434 [2024-06-10 12:08:35.996913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.434 [2024-06-10 12:08:35.996929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.434 [2024-06-10 12:08:35.996941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.434 [2024-06-10 12:08:36.007591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.434 [2024-06-10 12:08:36.007609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.434 [2024-06-10 12:08:36.007616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.434 [2024-06-10 12:08:36.018794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.434 [2024-06-10 12:08:36.018812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.434 [2024-06-10 12:08:36.018818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.434 [2024-06-10 12:08:36.030726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.434 [2024-06-10 12:08:36.030743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.434 [2024-06-10 12:08:36.030749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.434 [2024-06-10 12:08:36.041585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.434 [2024-06-10 12:08:36.041602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.434 [2024-06-10 12:08:36.041608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.434 [2024-06-10 12:08:36.052531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.434 [2024-06-10 12:08:36.052547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.434 [2024-06-10 12:08:36.052553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.434 [2024-06-10 12:08:36.064606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.434 [2024-06-10 12:08:36.064623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:32 nsid:1 lba:25335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.434 [2024-06-10 12:08:36.064629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.434 [2024-06-10 12:08:36.075719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.434 [2024-06-10 12:08:36.075735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.434 [2024-06-10 12:08:36.075741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.434 [2024-06-10 12:08:36.086679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.434 [2024-06-10 12:08:36.086696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.434 [2024-06-10 12:08:36.086703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.434 [2024-06-10 12:08:36.097682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.434 [2024-06-10 12:08:36.097698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.434 [2024-06-10 12:08:36.097704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.434 [2024-06-10 12:08:36.109729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.434 [2024-06-10 12:08:36.109745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.434 [2024-06-10 12:08:36.109752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.434 [2024-06-10 12:08:36.120404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.434 [2024-06-10 12:08:36.120420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.434 [2024-06-10 12:08:36.120426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.434 [2024-06-10 12:08:36.132505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.434 [2024-06-10 12:08:36.132520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.434 [2024-06-10 12:08:36.132526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.434 [2024-06-10 12:08:36.143422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.434 [2024-06-10 12:08:36.143438] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.434 [2024-06-10 12:08:36.143444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.434 [2024-06-10 12:08:36.154445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.434 [2024-06-10 12:08:36.154460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.434 [2024-06-10 12:08:36.154466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.434 [2024-06-10 12:08:36.166397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.434 [2024-06-10 12:08:36.166413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.434 [2024-06-10 12:08:36.166419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.434 [2024-06-10 12:08:36.177481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.434 [2024-06-10 12:08:36.177497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.434 [2024-06-10 12:08:36.177503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.434 [2024-06-10 12:08:36.188428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.434 [2024-06-10 12:08:36.188444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.434 [2024-06-10 12:08:36.188453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.434 [2024-06-10 12:08:36.199538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.434 [2024-06-10 12:08:36.199554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.434 [2024-06-10 12:08:36.199560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.696 [2024-06-10 12:08:36.211405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.696 [2024-06-10 12:08:36.211422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.696 [2024-06-10 12:08:36.211429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.696 [2024-06-10 12:08:36.222494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.696 
[2024-06-10 12:08:36.222510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.697 [2024-06-10 12:08:36.222516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.697 [2024-06-10 12:08:36.233674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.697 [2024-06-10 12:08:36.233691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.697 [2024-06-10 12:08:36.233697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.697 [2024-06-10 12:08:36.245330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.697 [2024-06-10 12:08:36.245346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.697 [2024-06-10 12:08:36.245352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.697 [2024-06-10 12:08:36.256221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.697 [2024-06-10 12:08:36.256237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.697 [2024-06-10 12:08:36.256246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.697 [2024-06-10 12:08:36.267864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.697 [2024-06-10 12:08:36.267881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.697 [2024-06-10 12:08:36.267887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.697 [2024-06-10 12:08:36.278993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.697 [2024-06-10 12:08:36.279009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.697 [2024-06-10 12:08:36.279015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.697 [2024-06-10 12:08:36.289853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.697 [2024-06-10 12:08:36.289872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.697 [2024-06-10 12:08:36.289878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.697 [2024-06-10 12:08:36.302002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x942070) 00:30:42.697 [2024-06-10 12:08:36.302018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.697 [2024-06-10 12:08:36.302024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.697 [2024-06-10 12:08:36.312800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.697 [2024-06-10 12:08:36.312816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.697 [2024-06-10 12:08:36.312823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.697 [2024-06-10 12:08:36.323728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.697 [2024-06-10 12:08:36.323744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.697 [2024-06-10 12:08:36.323750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.697 [2024-06-10 12:08:36.335024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.697 [2024-06-10 12:08:36.335041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.697 [2024-06-10 12:08:36.335047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.697 [2024-06-10 12:08:36.346888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.697 [2024-06-10 12:08:36.346904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.697 [2024-06-10 12:08:36.346910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.697 [2024-06-10 12:08:36.357954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.697 [2024-06-10 12:08:36.357970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.697 [2024-06-10 12:08:36.357976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.697 [2024-06-10 12:08:36.368887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.697 [2024-06-10 12:08:36.368903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.697 [2024-06-10 12:08:36.368909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.697 [2024-06-10 12:08:36.380692] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.697 [2024-06-10 12:08:36.380708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.697 [2024-06-10 12:08:36.380714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.697 [2024-06-10 12:08:36.391691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.697 [2024-06-10 12:08:36.391706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.697 [2024-06-10 12:08:36.391712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.697 [2024-06-10 12:08:36.403605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.697 [2024-06-10 12:08:36.403620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.697 [2024-06-10 12:08:36.403626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.697 [2024-06-10 12:08:36.414598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.697 [2024-06-10 12:08:36.414614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.697 [2024-06-10 12:08:36.414620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.697 [2024-06-10 12:08:36.425485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.697 [2024-06-10 12:08:36.425501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.697 [2024-06-10 12:08:36.425507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.697 [2024-06-10 12:08:36.436613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.697 [2024-06-10 12:08:36.436628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.697 [2024-06-10 12:08:36.436634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.697 [2024-06-10 12:08:36.448486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.697 [2024-06-10 12:08:36.448502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.697 [2024-06-10 12:08:36.448508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:30:42.697 [2024-06-10 12:08:36.459421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.697 [2024-06-10 12:08:36.459437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.697 [2024-06-10 12:08:36.459443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.958 [2024-06-10 12:08:36.470680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.958 [2024-06-10 12:08:36.470697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.958 [2024-06-10 12:08:36.470703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.958 [2024-06-10 12:08:36.482649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.958 [2024-06-10 12:08:36.482665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.958 [2024-06-10 12:08:36.482675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.958 [2024-06-10 12:08:36.493487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.958 [2024-06-10 12:08:36.493504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.958 [2024-06-10 12:08:36.493511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.958 [2024-06-10 12:08:36.504643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.958 [2024-06-10 12:08:36.504660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.958 [2024-06-10 12:08:36.504666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.958 [2024-06-10 12:08:36.516458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.959 [2024-06-10 12:08:36.516474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.959 [2024-06-10 12:08:36.516480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.959 [2024-06-10 12:08:36.527553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.959 [2024-06-10 12:08:36.527570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.959 [2024-06-10 12:08:36.527576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.959 [2024-06-10 12:08:36.538591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.959 [2024-06-10 12:08:36.538607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.959 [2024-06-10 12:08:36.538613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.959 [2024-06-10 12:08:36.549749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.959 [2024-06-10 12:08:36.549765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.959 [2024-06-10 12:08:36.549771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.959 [2024-06-10 12:08:36.561651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.959 [2024-06-10 12:08:36.561667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.959 [2024-06-10 12:08:36.561674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.959 [2024-06-10 12:08:36.572748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.959 [2024-06-10 12:08:36.572764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.959 [2024-06-10 12:08:36.572770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.959 [2024-06-10 12:08:36.583716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.959 [2024-06-10 12:08:36.583735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.959 [2024-06-10 12:08:36.583741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.959 [2024-06-10 12:08:36.594841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.959 [2024-06-10 12:08:36.594857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.959 [2024-06-10 12:08:36.594863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.959 [2024-06-10 12:08:36.606710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.959 [2024-06-10 12:08:36.606727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.959 [2024-06-10 12:08:36.606733] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.959 [2024-06-10 12:08:36.617700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.959 [2024-06-10 12:08:36.617716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.959 [2024-06-10 12:08:36.617722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.959 [2024-06-10 12:08:36.629520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.959 [2024-06-10 12:08:36.629535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.959 [2024-06-10 12:08:36.629541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.959 [2024-06-10 12:08:36.640380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.959 [2024-06-10 12:08:36.640395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.959 [2024-06-10 12:08:36.640401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.959 [2024-06-10 12:08:36.651393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.959 [2024-06-10 12:08:36.651409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.959 [2024-06-10 12:08:36.651415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.959 [2024-06-10 12:08:36.662544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.959 [2024-06-10 12:08:36.662560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.959 [2024-06-10 12:08:36.662566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.959 [2024-06-10 12:08:36.674452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.959 [2024-06-10 12:08:36.674468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.959 [2024-06-10 12:08:36.674474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.959 [2024-06-10 12:08:36.685501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.959 [2024-06-10 12:08:36.685517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.959 
[2024-06-10 12:08:36.685523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.959 [2024-06-10 12:08:36.696550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.959 [2024-06-10 12:08:36.696566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.959 [2024-06-10 12:08:36.696572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.959 [2024-06-10 12:08:36.708059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.959 [2024-06-10 12:08:36.708075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.959 [2024-06-10 12:08:36.708081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.959 [2024-06-10 12:08:36.719389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:42.959 [2024-06-10 12:08:36.719405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.959 [2024-06-10 12:08:36.719411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.221 [2024-06-10 12:08:36.730396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.221 [2024-06-10 12:08:36.730413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.221 [2024-06-10 12:08:36.730419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.221 [2024-06-10 12:08:36.742286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.221 [2024-06-10 12:08:36.742302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.221 [2024-06-10 12:08:36.742308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.221 [2024-06-10 12:08:36.753247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.221 [2024-06-10 12:08:36.753263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.221 [2024-06-10 12:08:36.753269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.221 [2024-06-10 12:08:36.764289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.221 [2024-06-10 12:08:36.764306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15613 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.221 [2024-06-10 12:08:36.764312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.221 [2024-06-10 12:08:36.776190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.221 [2024-06-10 12:08:36.776206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.221 [2024-06-10 12:08:36.776215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.221 [2024-06-10 12:08:36.787273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.221 [2024-06-10 12:08:36.787288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.221 [2024-06-10 12:08:36.787295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.221 [2024-06-10 12:08:36.798109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.221 [2024-06-10 12:08:36.798125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.221 [2024-06-10 12:08:36.798131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.221 [2024-06-10 12:08:36.810034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.221 [2024-06-10 12:08:36.810050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.221 [2024-06-10 12:08:36.810056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.221 [2024-06-10 12:08:36.821022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.221 [2024-06-10 12:08:36.821038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.221 [2024-06-10 12:08:36.821044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.221 [2024-06-10 12:08:36.832097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.221 [2024-06-10 12:08:36.832113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.221 [2024-06-10 12:08:36.832119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.221 [2024-06-10 12:08:36.843990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.221 [2024-06-10 12:08:36.844006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:1 lba:21259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.221 [2024-06-10 12:08:36.844012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.221 [2024-06-10 12:08:36.855067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.221 [2024-06-10 12:08:36.855083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.221 [2024-06-10 12:08:36.855089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.221 [2024-06-10 12:08:36.866057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.221 [2024-06-10 12:08:36.866074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.221 [2024-06-10 12:08:36.866079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.221 [2024-06-10 12:08:36.878608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.221 [2024-06-10 12:08:36.878624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.221 [2024-06-10 12:08:36.878630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.221 [2024-06-10 12:08:36.890254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.221 [2024-06-10 12:08:36.890270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.221 [2024-06-10 12:08:36.890276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.221 [2024-06-10 12:08:36.900547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.221 [2024-06-10 12:08:36.900563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.221 [2024-06-10 12:08:36.900569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.221 [2024-06-10 12:08:36.912373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.221 [2024-06-10 12:08:36.912389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.221 [2024-06-10 12:08:36.912396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.221 [2024-06-10 12:08:36.924462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.221 [2024-06-10 12:08:36.924477] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.221 [2024-06-10 12:08:36.924483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.221 [2024-06-10 12:08:36.934382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.221 [2024-06-10 12:08:36.934397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.221 [2024-06-10 12:08:36.934404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.221 [2024-06-10 12:08:36.946230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.221 [2024-06-10 12:08:36.946249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.221 [2024-06-10 12:08:36.946256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.221 [2024-06-10 12:08:36.957194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.221 [2024-06-10 12:08:36.957210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.221 [2024-06-10 12:08:36.957217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.221 [2024-06-10 12:08:36.968984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.221 [2024-06-10 12:08:36.969000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.221 [2024-06-10 12:08:36.969009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.221 [2024-06-10 12:08:36.979648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.221 [2024-06-10 12:08:36.979664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.221 [2024-06-10 12:08:36.979670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.221 [2024-06-10 12:08:36.991408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.221 [2024-06-10 12:08:36.991424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.221 [2024-06-10 12:08:36.991430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.483 [2024-06-10 12:08:37.002434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.483 
[2024-06-10 12:08:37.002450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.483 [2024-06-10 12:08:37.002456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.483 [2024-06-10 12:08:37.013333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.483 [2024-06-10 12:08:37.013349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.483 [2024-06-10 12:08:37.013355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.483 [2024-06-10 12:08:37.025153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.483 [2024-06-10 12:08:37.025169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:57 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.483 [2024-06-10 12:08:37.025175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.483 [2024-06-10 12:08:37.036232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.483 [2024-06-10 12:08:37.036251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.483 [2024-06-10 12:08:37.036258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.483 [2024-06-10 12:08:37.047197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.483 [2024-06-10 12:08:37.047212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.483 [2024-06-10 12:08:37.047218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.483 [2024-06-10 12:08:37.059151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.483 [2024-06-10 12:08:37.059166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.483 [2024-06-10 12:08:37.059174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.483 [2024-06-10 12:08:37.070210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.483 [2024-06-10 12:08:37.070231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.483 [2024-06-10 12:08:37.070237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.483 [2024-06-10 12:08:37.080981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x942070) 00:30:43.483 [2024-06-10 12:08:37.080997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.483 [2024-06-10 12:08:37.081003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.483 [2024-06-10 12:08:37.093106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.483 [2024-06-10 12:08:37.093122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.484 [2024-06-10 12:08:37.093129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.484 [2024-06-10 12:08:37.103884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.484 [2024-06-10 12:08:37.103901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.484 [2024-06-10 12:08:37.103907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.484 [2024-06-10 12:08:37.115025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.484 [2024-06-10 12:08:37.115041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.484 [2024-06-10 12:08:37.115048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.484 [2024-06-10 12:08:37.126514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.484 [2024-06-10 12:08:37.126530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.484 [2024-06-10 12:08:37.126537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.484 [2024-06-10 12:08:37.138806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.484 [2024-06-10 12:08:37.138823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.484 [2024-06-10 12:08:37.138829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.484 [2024-06-10 12:08:37.149879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.484 [2024-06-10 12:08:37.149895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.484 [2024-06-10 12:08:37.149901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.484 [2024-06-10 12:08:37.160591] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.484 [2024-06-10 12:08:37.160607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.484 [2024-06-10 12:08:37.160613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.484 [2024-06-10 12:08:37.172471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.484 [2024-06-10 12:08:37.172487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.484 [2024-06-10 12:08:37.172493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.484 [2024-06-10 12:08:37.183576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.484 [2024-06-10 12:08:37.183592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.484 [2024-06-10 12:08:37.183599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.484 [2024-06-10 12:08:37.194694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.484 [2024-06-10 12:08:37.194709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.484 [2024-06-10 12:08:37.194715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.484 [2024-06-10 12:08:37.206462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.484 [2024-06-10 12:08:37.206478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.484 [2024-06-10 12:08:37.206484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.484 [2024-06-10 12:08:37.217620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.484 [2024-06-10 12:08:37.217636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.484 [2024-06-10 12:08:37.217642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.484 [2024-06-10 12:08:37.228564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.484 [2024-06-10 12:08:37.228581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.484 [2024-06-10 12:08:37.228587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:30:43.484 [2024-06-10 12:08:37.239501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.484 [2024-06-10 12:08:37.239517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.484 [2024-06-10 12:08:37.239523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.484 [2024-06-10 12:08:37.251471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.484 [2024-06-10 12:08:37.251487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.484 [2024-06-10 12:08:37.251493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.746 [2024-06-10 12:08:37.262549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.746 [2024-06-10 12:08:37.262565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.746 [2024-06-10 12:08:37.262574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.746 [2024-06-10 12:08:37.273543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.746 [2024-06-10 12:08:37.273559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.746 [2024-06-10 12:08:37.273565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.746 [2024-06-10 12:08:37.284690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.746 [2024-06-10 12:08:37.284706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.746 [2024-06-10 12:08:37.284712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.746 [2024-06-10 12:08:37.296553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.746 [2024-06-10 12:08:37.296569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.746 [2024-06-10 12:08:37.296575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.746 [2024-06-10 12:08:37.307672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.746 [2024-06-10 12:08:37.307688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.746 [2024-06-10 12:08:37.307694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.746 [2024-06-10 12:08:37.318666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.746 [2024-06-10 12:08:37.318683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.746 [2024-06-10 12:08:37.318689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.746 [2024-06-10 12:08:37.330543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.746 [2024-06-10 12:08:37.330559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.746 [2024-06-10 12:08:37.330565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.746 [2024-06-10 12:08:37.341417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.746 [2024-06-10 12:08:37.341433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.746 [2024-06-10 12:08:37.341439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.746 [2024-06-10 12:08:37.352458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.746 [2024-06-10 12:08:37.352475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.746 [2024-06-10 12:08:37.352481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.746 [2024-06-10 12:08:37.364395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.746 [2024-06-10 12:08:37.364414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.746 [2024-06-10 12:08:37.364420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.746 [2024-06-10 12:08:37.375502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.746 [2024-06-10 12:08:37.375519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.746 [2024-06-10 12:08:37.375525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.746 [2024-06-10 12:08:37.386560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.746 [2024-06-10 12:08:37.386577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.746 [2024-06-10 12:08:37.386583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.746 [2024-06-10 12:08:37.398390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.746 [2024-06-10 12:08:37.398407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.746 [2024-06-10 12:08:37.398413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.746 [2024-06-10 12:08:37.409561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.746 [2024-06-10 12:08:37.409577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.746 [2024-06-10 12:08:37.409583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.746 [2024-06-10 12:08:37.420531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.746 [2024-06-10 12:08:37.420547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.746 [2024-06-10 12:08:37.420553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.746 [2024-06-10 12:08:37.431678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.746 [2024-06-10 12:08:37.431694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.746 [2024-06-10 12:08:37.431700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.746 [2024-06-10 12:08:37.443546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.746 [2024-06-10 12:08:37.443562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.746 [2024-06-10 12:08:37.443568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.746 [2024-06-10 12:08:37.454659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.746 [2024-06-10 12:08:37.454674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.746 [2024-06-10 12:08:37.454680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.746 [2024-06-10 12:08:37.465800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.746 [2024-06-10 12:08:37.465816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.746 [2024-06-10 12:08:37.465822] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.746 [2024-06-10 12:08:37.477492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.746 [2024-06-10 12:08:37.477509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.747 [2024-06-10 12:08:37.477515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.747 [2024-06-10 12:08:37.488523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.747 [2024-06-10 12:08:37.488539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.747 [2024-06-10 12:08:37.488545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.747 [2024-06-10 12:08:37.499524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.747 [2024-06-10 12:08:37.499540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.747 [2024-06-10 12:08:37.499546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.747 [2024-06-10 12:08:37.511233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:43.747 [2024-06-10 12:08:37.511253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.747 [2024-06-10 12:08:37.511259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.008 [2024-06-10 12:08:37.522058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:44.008 [2024-06-10 12:08:37.522074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.008 [2024-06-10 12:08:37.522080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.008 [2024-06-10 12:08:37.534145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:44.008 [2024-06-10 12:08:37.534162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.008 [2024-06-10 12:08:37.534168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.008 [2024-06-10 12:08:37.544984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070) 00:30:44.008 [2024-06-10 12:08:37.545002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.008 
[2024-06-10 12:08:37.545008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:44.008 [2024-06-10 12:08:37.555908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070)
00:30:44.008 [2024-06-10 12:08:37.555926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:44.008 [2024-06-10 12:08:37.555935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:44.008 [2024-06-10 12:08:37.567796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070)
00:30:44.008 [2024-06-10 12:08:37.567813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:44.008 [2024-06-10 12:08:37.567819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:44.008 [2024-06-10 12:08:37.578793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x942070)
00:30:44.008 [2024-06-10 12:08:37.578810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:44.008 [2024-06-10 12:08:37.578816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:44.008
00:30:44.008 Latency(us)
00:30:44.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:44.008 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:30:44.008 nvme0n1 : 2.00 22487.30 87.84 0.00 0.00 5686.55 2239.15 14308.69
00:30:44.008 ===================================================================================================================
00:30:44.008 Total : 22487.30 87.84 0.00 0.00 5686.55 2239.15 14308.69
00:30:44.008 0
00:30:44.008 12:08:37 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:44.008 12:08:37 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:44.008 12:08:37 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:44.008 | .driver_specific
00:30:44.008 | .nvme_error
00:30:44.008 | .status_code
00:30:44.008 | .command_transient_transport_error'
00:30:44.008 12:08:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:44.008 12:08:37 -- host/digest.sh@71 -- # (( 176 > 0 ))
00:30:44.008 12:08:37 -- host/digest.sh@73 -- # killprocess 2141192
00:30:44.008 12:08:37 -- common/autotest_common.sh@926 -- # '[' -z 2141192 ']'
00:30:44.008 12:08:37 -- common/autotest_common.sh@930 -- # kill -0 2141192
00:30:44.008 12:08:37 -- common/autotest_common.sh@931 -- # uname
00:30:44.008 12:08:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:30:44.008 12:08:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2141192
00:30:44.270 12:08:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:30:44.270 12:08:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:30:44.270 12:08:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2141192'
00:30:44.270 killing process with pid 2141192
00:30:44.270 12:08:37 -- common/autotest_common.sh@945 -- # kill 2141192
00:30:44.270 Received shutdown signal, test time was about 2.000000 seconds
00:30:44.270
00:30:44.270 Latency(us)
00:30:44.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:44.270 ===================================================================================================================
00:30:44.270 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:44.270 12:08:37 -- common/autotest_common.sh@950 -- # wait 2141192
00:30:44.270 12:08:37 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:30:44.270 12:08:37 -- host/digest.sh@54 -- # local rw bs qd
00:30:44.270 12:08:37 -- host/digest.sh@56 -- # rw=randread
00:30:44.270 12:08:37 -- host/digest.sh@56 -- # bs=131072
00:30:44.270 12:08:37 -- host/digest.sh@56 -- # qd=16
00:30:44.270 12:08:37 -- host/digest.sh@58 -- # bperfpid=2141884
00:30:44.270 12:08:37 -- host/digest.sh@60 -- # waitforlisten 2141884 /var/tmp/bperf.sock
00:30:44.270 12:08:37 -- common/autotest_common.sh@819 -- # '[' -z 2141884 ']'
00:30:44.270 12:08:37 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:30:44.270 12:08:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:44.270 12:08:37 -- common/autotest_common.sh@824 -- # local max_retries=100
00:30:44.270 12:08:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:44.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:30:44.270 12:08:37 -- common/autotest_common.sh@828 -- # xtrace_disable
00:30:44.270 12:08:37 -- common/autotest_common.sh@10 -- # set +x
00:30:44.270 [2024-06-10 12:08:37.981392] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:30:44.270 [2024-06-10 12:08:37.981447] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2141884 ]
00:30:44.270 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:44.270 Zero copy mechanism will not be used.
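The pass/fail decision that closed the 4 KiB randread run above comes down to reading the NVMe transient-transport-error counter over the bperf RPC socket and requiring it to be non-zero. The following is a minimal shell sketch of that check, not the digest.sh source: the rpc.py path, socket and jq filter are the ones printed in the trace, and the wrapper mirrors the get_transient_errcount helper seen there.

#!/usr/bin/env bash
# Sketch of the transient-error check traced above (paths taken from the log).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock

get_transient_errcount() {
        local bdev=$1
        # bdev_get_iostat exposes per-status-code NVMe error counters because the
        # controller is set up with --nvme-error-stat (see the setup traced below).
        "$RPC" -s "$SOCK" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0]
                        | .driver_specific
                        | .nvme_error
                        | .status_code
                        | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
# The run passes only if the injected CRC32C corruption produced at least one
# TRANSIENT TRANSPORT ERROR completion; the trace above counted 176 of them.
(( errcount > 0 ))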
00:30:44.270 EAL: No free 2048 kB hugepages reported on node 1
00:30:44.531 [2024-06-10 12:08:38.057918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:44.531 [2024-06-10 12:08:38.108665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:30:45.104 12:08:38 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:30:45.104 12:08:38 -- common/autotest_common.sh@852 -- # return 0
00:30:45.104 12:08:38 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:45.104 12:08:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:45.104 12:08:38 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:45.104 12:08:38 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:45.104 12:08:38 -- common/autotest_common.sh@10 -- # set +x
00:30:45.365 12:08:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:45.365 12:08:38 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:45.365 12:08:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:45.365 nvme0n1
00:30:45.627 12:08:39 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:30:45.627 12:08:39 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:45.627 12:08:39 -- common/autotest_common.sh@10 -- # set +x
00:30:45.627 12:08:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:45.627 12:08:39 -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:45.627 12:08:39 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:45.627 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:45.627 Zero copy mechanism will not be used.
00:30:45.627 Running I/O for 2 seconds...
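The setup traced above for the 128 KiB randread run can be read as the RPC sequence below. This is a condensed sketch, not the digest.sh source: the bperf socket, target address, NQN and bdevperf.py invocation are copied from the trace, while the socket behind rpc_cmd is not shown after xtrace is disabled and is left at the SPDK default here as an assumption, as is the reading of -i 32 as an injection interval.

#!/usr/bin/env bash
# Condensed sketch of the setup traced above for the randread/131072/qd16 run.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"  # bperf_rpc: the bdevperf instance
INJECT_RPC="$SPDK/scripts/rpc.py"                        # rpc_cmd: socket not shown in trace, default assumed

# Count NVMe errors per status code and retry failed I/O indefinitely at the bdev layer.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Keep CRC32C error injection disabled while the controller is attached ...
$INJECT_RPC accel_error_inject_error -o crc32c -t disable

# ... attach over TCP with data digest (--ddgst) enabled ...
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# ... then corrupt CRC32C results (-i 32, presumably one in every 32 operations),
# so READ data digests start failing as seen in the records that follow.
$INJECT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the bdevperf job; the process was started with -z and waits for this RPC
# before running 2 seconds of randread at 128 KiB I/O size and queue depth 16.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests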
00:30:45.627 [2024-06-10 12:08:39.256976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.627 [2024-06-10 12:08:39.257010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.627 [2024-06-10 12:08:39.257019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.627 [2024-06-10 12:08:39.267567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.627 [2024-06-10 12:08:39.267589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.627 [2024-06-10 12:08:39.267596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.627 [2024-06-10 12:08:39.278567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.627 [2024-06-10 12:08:39.278595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.627 [2024-06-10 12:08:39.278602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.627 [2024-06-10 12:08:39.289343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.627 [2024-06-10 12:08:39.289361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.627 [2024-06-10 12:08:39.289368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.627 [2024-06-10 12:08:39.300246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.627 [2024-06-10 12:08:39.300265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.627 [2024-06-10 12:08:39.300271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.627 [2024-06-10 12:08:39.311621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.627 [2024-06-10 12:08:39.311639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.627 [2024-06-10 12:08:39.311645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.627 [2024-06-10 12:08:39.321916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.628 [2024-06-10 12:08:39.321934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.628 [2024-06-10 12:08:39.321940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.628 [2024-06-10 12:08:39.331552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.628 [2024-06-10 12:08:39.331570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.628 [2024-06-10 12:08:39.331576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.628 [2024-06-10 12:08:39.341322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.628 [2024-06-10 12:08:39.341340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.628 [2024-06-10 12:08:39.341346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.628 [2024-06-10 12:08:39.352399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.628 [2024-06-10 12:08:39.352417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.628 [2024-06-10 12:08:39.352423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.628 [2024-06-10 12:08:39.362602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.628 [2024-06-10 12:08:39.362619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.628 [2024-06-10 12:08:39.362629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.628 [2024-06-10 12:08:39.373825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.628 [2024-06-10 12:08:39.373843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.628 [2024-06-10 12:08:39.373850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.628 [2024-06-10 12:08:39.384960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.628 [2024-06-10 12:08:39.384978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.628 [2024-06-10 12:08:39.384984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.628 [2024-06-10 12:08:39.395158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.628 [2024-06-10 12:08:39.395175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.628 [2024-06-10 12:08:39.395182] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.890 [2024-06-10 12:08:39.406922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.890 [2024-06-10 12:08:39.406940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.890 [2024-06-10 12:08:39.406946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.890 [2024-06-10 12:08:39.417749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.890 [2024-06-10 12:08:39.417767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.890 [2024-06-10 12:08:39.417773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.890 [2024-06-10 12:08:39.429856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.890 [2024-06-10 12:08:39.429874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.891 [2024-06-10 12:08:39.429880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.891 [2024-06-10 12:08:39.442189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.891 [2024-06-10 12:08:39.442206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.891 [2024-06-10 12:08:39.442213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.891 [2024-06-10 12:08:39.452303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.891 [2024-06-10 12:08:39.452321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.891 [2024-06-10 12:08:39.452327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.891 [2024-06-10 12:08:39.467206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.891 [2024-06-10 12:08:39.467228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.891 [2024-06-10 12:08:39.467235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.891 [2024-06-10 12:08:39.476623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.891 [2024-06-10 12:08:39.476641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:45.891 [2024-06-10 12:08:39.476648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.891 [2024-06-10 12:08:39.486266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.891 [2024-06-10 12:08:39.486284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.891 [2024-06-10 12:08:39.486291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.891 [2024-06-10 12:08:39.494364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.891 [2024-06-10 12:08:39.494382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.891 [2024-06-10 12:08:39.494389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.891 [2024-06-10 12:08:39.507897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.891 [2024-06-10 12:08:39.507916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.891 [2024-06-10 12:08:39.507922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.891 [2024-06-10 12:08:39.516366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.891 [2024-06-10 12:08:39.516384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.891 [2024-06-10 12:08:39.516391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.891 [2024-06-10 12:08:39.525615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.891 [2024-06-10 12:08:39.525632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.891 [2024-06-10 12:08:39.525638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.891 [2024-06-10 12:08:39.533463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.891 [2024-06-10 12:08:39.533481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.891 [2024-06-10 12:08:39.533487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.891 [2024-06-10 12:08:39.541209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.891 [2024-06-10 12:08:39.541226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.891 [2024-06-10 12:08:39.541232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.891 [2024-06-10 12:08:39.548331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.891 [2024-06-10 12:08:39.548348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.891 [2024-06-10 12:08:39.548355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.891 [2024-06-10 12:08:39.554491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.891 [2024-06-10 12:08:39.554509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.891 [2024-06-10 12:08:39.554515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.891 [2024-06-10 12:08:39.560272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.891 [2024-06-10 12:08:39.560289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.891 [2024-06-10 12:08:39.560295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.891 [2024-06-10 12:08:39.565668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.891 [2024-06-10 12:08:39.565685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.891 [2024-06-10 12:08:39.565691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.891 [2024-06-10 12:08:39.570771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.891 [2024-06-10 12:08:39.570789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.891 [2024-06-10 12:08:39.570794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.891 [2024-06-10 12:08:39.574885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.891 [2024-06-10 12:08:39.574903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.891 [2024-06-10 12:08:39.574909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.891 [2024-06-10 12:08:39.581897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.891 [2024-06-10 12:08:39.581915] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.891 [2024-06-10 12:08:39.581922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.891 [2024-06-10 12:08:39.587742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.891 [2024-06-10 12:08:39.587759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.891 [2024-06-10 12:08:39.587766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.891 [2024-06-10 12:08:39.596165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.891 [2024-06-10 12:08:39.596182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.891 [2024-06-10 12:08:39.596192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.891 [2024-06-10 12:08:39.605715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.891 [2024-06-10 12:08:39.605733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.891 [2024-06-10 12:08:39.605739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.891 [2024-06-10 12:08:39.615887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.891 [2024-06-10 12:08:39.615905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.891 [2024-06-10 12:08:39.615911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.891 [2024-06-10 12:08:39.626317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.891 [2024-06-10 12:08:39.626334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.891 [2024-06-10 12:08:39.626340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.891 [2024-06-10 12:08:39.636224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.891 [2024-06-10 12:08:39.636241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.891 [2024-06-10 12:08:39.636251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.891 [2024-06-10 12:08:39.648348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.891 
[2024-06-10 12:08:39.648365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.891 [2024-06-10 12:08:39.648371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.891 [2024-06-10 12:08:39.659519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:45.891 [2024-06-10 12:08:39.659536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.891 [2024-06-10 12:08:39.659542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.153 [2024-06-10 12:08:39.671204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.153 [2024-06-10 12:08:39.671222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.153 [2024-06-10 12:08:39.671228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.153 [2024-06-10 12:08:39.680604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.153 [2024-06-10 12:08:39.680621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.153 [2024-06-10 12:08:39.680627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.153 [2024-06-10 12:08:39.690227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.153 [2024-06-10 12:08:39.690253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.153 [2024-06-10 12:08:39.690260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.153 [2024-06-10 12:08:39.700127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.153 [2024-06-10 12:08:39.700144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.153 [2024-06-10 12:08:39.700151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.153 [2024-06-10 12:08:39.711628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.153 [2024-06-10 12:08:39.711644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.153 [2024-06-10 12:08:39.711651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.153 [2024-06-10 12:08:39.722421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x2090d00) 00:30:46.153 [2024-06-10 12:08:39.722438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.153 [2024-06-10 12:08:39.722445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.153 [2024-06-10 12:08:39.732796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.153 [2024-06-10 12:08:39.732813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.153 [2024-06-10 12:08:39.732819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.153 [2024-06-10 12:08:39.741464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.153 [2024-06-10 12:08:39.741481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.153 [2024-06-10 12:08:39.741487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.153 [2024-06-10 12:08:39.751767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.153 [2024-06-10 12:08:39.751784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.153 [2024-06-10 12:08:39.751790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.153 [2024-06-10 12:08:39.764052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.153 [2024-06-10 12:08:39.764069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.153 [2024-06-10 12:08:39.764075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.153 [2024-06-10 12:08:39.776847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.153 [2024-06-10 12:08:39.776864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.153 [2024-06-10 12:08:39.776871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.153 [2024-06-10 12:08:39.787572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.153 [2024-06-10 12:08:39.787589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.153 [2024-06-10 12:08:39.787595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.153 [2024-06-10 12:08:39.798943] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.153 [2024-06-10 12:08:39.798960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.154 [2024-06-10 12:08:39.798967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.154 [2024-06-10 12:08:39.809730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.154 [2024-06-10 12:08:39.809747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.154 [2024-06-10 12:08:39.809753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.154 [2024-06-10 12:08:39.820074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.154 [2024-06-10 12:08:39.820091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.154 [2024-06-10 12:08:39.820098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.154 [2024-06-10 12:08:39.830525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.154 [2024-06-10 12:08:39.830542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.154 [2024-06-10 12:08:39.830547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.154 [2024-06-10 12:08:39.841893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.154 [2024-06-10 12:08:39.841911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.154 [2024-06-10 12:08:39.841917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.154 [2024-06-10 12:08:39.852839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.154 [2024-06-10 12:08:39.852856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.154 [2024-06-10 12:08:39.852862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.154 [2024-06-10 12:08:39.865131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.154 [2024-06-10 12:08:39.865149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.154 [2024-06-10 12:08:39.865155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:30:46.154 [2024-06-10 12:08:39.876360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.154 [2024-06-10 12:08:39.876377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.154 [2024-06-10 12:08:39.876386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.154 [2024-06-10 12:08:39.886356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.154 [2024-06-10 12:08:39.886373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.154 [2024-06-10 12:08:39.886379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.154 [2024-06-10 12:08:39.898380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.154 [2024-06-10 12:08:39.898397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.154 [2024-06-10 12:08:39.898404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.154 [2024-06-10 12:08:39.911542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.154 [2024-06-10 12:08:39.911559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.154 [2024-06-10 12:08:39.911566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.154 [2024-06-10 12:08:39.921165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.154 [2024-06-10 12:08:39.921182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.154 [2024-06-10 12:08:39.921189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.415 [2024-06-10 12:08:39.931941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.415 [2024-06-10 12:08:39.931959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.415 [2024-06-10 12:08:39.931965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.415 [2024-06-10 12:08:39.943523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.415 [2024-06-10 12:08:39.943541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.415 [2024-06-10 12:08:39.943547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.415 [2024-06-10 12:08:39.950097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.415 [2024-06-10 12:08:39.950114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.415 [2024-06-10 12:08:39.950121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.415 [2024-06-10 12:08:39.961392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.415 [2024-06-10 12:08:39.961409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.415 [2024-06-10 12:08:39.961416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.415 [2024-06-10 12:08:39.972704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.415 [2024-06-10 12:08:39.972721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.415 [2024-06-10 12:08:39.972727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.415 [2024-06-10 12:08:39.984047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.415 [2024-06-10 12:08:39.984065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.415 [2024-06-10 12:08:39.984071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.415 [2024-06-10 12:08:39.995117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.415 [2024-06-10 12:08:39.995134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.415 [2024-06-10 12:08:39.995141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.415 [2024-06-10 12:08:40.006776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.415 [2024-06-10 12:08:40.006795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.415 [2024-06-10 12:08:40.006802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.415 [2024-06-10 12:08:40.019613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.415 [2024-06-10 12:08:40.019632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.415 [2024-06-10 12:08:40.019639] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.415 [2024-06-10 12:08:40.032321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.415 [2024-06-10 12:08:40.032339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.415 [2024-06-10 12:08:40.032345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.415 [2024-06-10 12:08:40.044167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.415 [2024-06-10 12:08:40.044188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.416 [2024-06-10 12:08:40.044195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.416 [2024-06-10 12:08:40.055165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.416 [2024-06-10 12:08:40.055183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.416 [2024-06-10 12:08:40.055189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.416 [2024-06-10 12:08:40.065985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.416 [2024-06-10 12:08:40.066002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.416 [2024-06-10 12:08:40.066012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.416 [2024-06-10 12:08:40.074748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.416 [2024-06-10 12:08:40.074765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.416 [2024-06-10 12:08:40.074771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.416 [2024-06-10 12:08:40.085639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.416 [2024-06-10 12:08:40.085657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.416 [2024-06-10 12:08:40.085663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.416 [2024-06-10 12:08:40.097204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.416 [2024-06-10 12:08:40.097222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:46.416 [2024-06-10 12:08:40.097228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.416 [2024-06-10 12:08:40.108921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.416 [2024-06-10 12:08:40.108938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.416 [2024-06-10 12:08:40.108945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.416 [2024-06-10 12:08:40.120745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.416 [2024-06-10 12:08:40.120763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.416 [2024-06-10 12:08:40.120769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.416 [2024-06-10 12:08:40.132180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.416 [2024-06-10 12:08:40.132197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.416 [2024-06-10 12:08:40.132204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.416 [2024-06-10 12:08:40.142929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.416 [2024-06-10 12:08:40.142946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.416 [2024-06-10 12:08:40.142954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.416 [2024-06-10 12:08:40.153226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.416 [2024-06-10 12:08:40.153247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.416 [2024-06-10 12:08:40.153254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.416 [2024-06-10 12:08:40.163882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.416 [2024-06-10 12:08:40.163902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.416 [2024-06-10 12:08:40.163908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.416 [2024-06-10 12:08:40.175432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.416 [2024-06-10 12:08:40.175450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.416 [2024-06-10 12:08:40.175456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.677 [2024-06-10 12:08:40.187651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.677 [2024-06-10 12:08:40.187668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.677 [2024-06-10 12:08:40.187675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.677 [2024-06-10 12:08:40.198772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.677 [2024-06-10 12:08:40.198789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.677 [2024-06-10 12:08:40.198797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.677 [2024-06-10 12:08:40.209012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.677 [2024-06-10 12:08:40.209030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.677 [2024-06-10 12:08:40.209036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.677 [2024-06-10 12:08:40.218615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.677 [2024-06-10 12:08:40.218632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.677 [2024-06-10 12:08:40.218639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.677 [2024-06-10 12:08:40.227351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.677 [2024-06-10 12:08:40.227368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.677 [2024-06-10 12:08:40.227374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.677 [2024-06-10 12:08:40.238474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.677 [2024-06-10 12:08:40.238491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.677 [2024-06-10 12:08:40.238497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.677 [2024-06-10 12:08:40.248213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.677 [2024-06-10 12:08:40.248231] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.677 [2024-06-10 12:08:40.248237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.677 [2024-06-10 12:08:40.259659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.677 [2024-06-10 12:08:40.259677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.677 [2024-06-10 12:08:40.259683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.677 [2024-06-10 12:08:40.271028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.677 [2024-06-10 12:08:40.271045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.677 [2024-06-10 12:08:40.271051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.677 [2024-06-10 12:08:40.283335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.677 [2024-06-10 12:08:40.283353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.677 [2024-06-10 12:08:40.283359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.677 [2024-06-10 12:08:40.292842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.677 [2024-06-10 12:08:40.292860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.677 [2024-06-10 12:08:40.292866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.677 [2024-06-10 12:08:40.302368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.677 [2024-06-10 12:08:40.302386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.677 [2024-06-10 12:08:40.302392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.677 [2024-06-10 12:08:40.313799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.677 [2024-06-10 12:08:40.313816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.677 [2024-06-10 12:08:40.313822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.677 [2024-06-10 12:08:40.323349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 
00:30:46.677 [2024-06-10 12:08:40.323366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.677 [2024-06-10 12:08:40.323372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.677 [2024-06-10 12:08:40.331317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.677 [2024-06-10 12:08:40.331334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.677 [2024-06-10 12:08:40.331340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.677 [2024-06-10 12:08:40.338812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.677 [2024-06-10 12:08:40.338831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.677 [2024-06-10 12:08:40.338840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.677 [2024-06-10 12:08:40.348373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.678 [2024-06-10 12:08:40.348391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.678 [2024-06-10 12:08:40.348397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.678 [2024-06-10 12:08:40.358042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.678 [2024-06-10 12:08:40.358060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.678 [2024-06-10 12:08:40.358066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.678 [2024-06-10 12:08:40.368382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.678 [2024-06-10 12:08:40.368401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.678 [2024-06-10 12:08:40.368407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.678 [2024-06-10 12:08:40.378938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.678 [2024-06-10 12:08:40.378956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.678 [2024-06-10 12:08:40.378962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.678 [2024-06-10 12:08:40.388108] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.678 [2024-06-10 12:08:40.388126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.678 [2024-06-10 12:08:40.388132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.678 [2024-06-10 12:08:40.399370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.678 [2024-06-10 12:08:40.399388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.678 [2024-06-10 12:08:40.399395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.678 [2024-06-10 12:08:40.410112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.678 [2024-06-10 12:08:40.410129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.678 [2024-06-10 12:08:40.410136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.678 [2024-06-10 12:08:40.421927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.678 [2024-06-10 12:08:40.421946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.678 [2024-06-10 12:08:40.421952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.678 [2024-06-10 12:08:40.433113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.678 [2024-06-10 12:08:40.433131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.678 [2024-06-10 12:08:40.433138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.678 [2024-06-10 12:08:40.443313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.678 [2024-06-10 12:08:40.443331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.678 [2024-06-10 12:08:40.443337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.940 [2024-06-10 12:08:40.455423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.940 [2024-06-10 12:08:40.455442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.940 [2024-06-10 12:08:40.455448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:30:46.940 [2024-06-10 12:08:40.468162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.940 [2024-06-10 12:08:40.468179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.940 [2024-06-10 12:08:40.468186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.940 [2024-06-10 12:08:40.478772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.940 [2024-06-10 12:08:40.478790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.940 [2024-06-10 12:08:40.478796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.940 [2024-06-10 12:08:40.488766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.940 [2024-06-10 12:08:40.488784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.940 [2024-06-10 12:08:40.488790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.940 [2024-06-10 12:08:40.500273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.940 [2024-06-10 12:08:40.500292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.940 [2024-06-10 12:08:40.500298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.940 [2024-06-10 12:08:40.511515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.940 [2024-06-10 12:08:40.511533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.940 [2024-06-10 12:08:40.511539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.940 [2024-06-10 12:08:40.523184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.940 [2024-06-10 12:08:40.523203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.940 [2024-06-10 12:08:40.523213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.940 [2024-06-10 12:08:40.533451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.940 [2024-06-10 12:08:40.533468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.940 [2024-06-10 12:08:40.533475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.940 [2024-06-10 12:08:40.545678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.940 [2024-06-10 12:08:40.545696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.940 [2024-06-10 12:08:40.545702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.940 [2024-06-10 12:08:40.555776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.940 [2024-06-10 12:08:40.555794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.940 [2024-06-10 12:08:40.555800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.940 [2024-06-10 12:08:40.565729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.940 [2024-06-10 12:08:40.565748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.940 [2024-06-10 12:08:40.565754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.940 [2024-06-10 12:08:40.574908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.940 [2024-06-10 12:08:40.574927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.940 [2024-06-10 12:08:40.574933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.940 [2024-06-10 12:08:40.586456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.940 [2024-06-10 12:08:40.586475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.940 [2024-06-10 12:08:40.586482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.940 [2024-06-10 12:08:40.598900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.940 [2024-06-10 12:08:40.598918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.940 [2024-06-10 12:08:40.598925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.940 [2024-06-10 12:08:40.609964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.940 [2024-06-10 12:08:40.609983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.940 [2024-06-10 12:08:40.609989] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.940 [2024-06-10 12:08:40.620712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.940 [2024-06-10 12:08:40.620737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.940 [2024-06-10 12:08:40.620744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.940 [2024-06-10 12:08:40.632647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.940 [2024-06-10 12:08:40.632666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.940 [2024-06-10 12:08:40.632672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.940 [2024-06-10 12:08:40.645378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.940 [2024-06-10 12:08:40.645396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.940 [2024-06-10 12:08:40.645402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.940 [2024-06-10 12:08:40.656654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.940 [2024-06-10 12:08:40.656673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.940 [2024-06-10 12:08:40.656679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.940 [2024-06-10 12:08:40.668483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.940 [2024-06-10 12:08:40.668501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.940 [2024-06-10 12:08:40.668508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.940 [2024-06-10 12:08:40.681466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.940 [2024-06-10 12:08:40.681484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.940 [2024-06-10 12:08:40.681490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.940 [2024-06-10 12:08:40.692567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.940 [2024-06-10 12:08:40.692586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.940 [2024-06-10 
12:08:40.692592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.940 [2024-06-10 12:08:40.703567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:46.940 [2024-06-10 12:08:40.703585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.940 [2024-06-10 12:08:40.703591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:47.202 [2024-06-10 12:08:40.713885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.202 [2024-06-10 12:08:40.713905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.202 [2024-06-10 12:08:40.713911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.202 [2024-06-10 12:08:40.725356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.202 [2024-06-10 12:08:40.725374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.202 [2024-06-10 12:08:40.725380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:47.202 [2024-06-10 12:08:40.737105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.202 [2024-06-10 12:08:40.737123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.202 [2024-06-10 12:08:40.737130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:47.202 [2024-06-10 12:08:40.750607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.202 [2024-06-10 12:08:40.750625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.202 [2024-06-10 12:08:40.750631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:47.202 [2024-06-10 12:08:40.763039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.202 [2024-06-10 12:08:40.763057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.202 [2024-06-10 12:08:40.763063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.202 [2024-06-10 12:08:40.776717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.202 [2024-06-10 12:08:40.776735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:47.202 [2024-06-10 12:08:40.776742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:47.202 [2024-06-10 12:08:40.790108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.202 [2024-06-10 12:08:40.790127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.202 [2024-06-10 12:08:40.790133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:47.202 [2024-06-10 12:08:40.804057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.202 [2024-06-10 12:08:40.804076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.202 [2024-06-10 12:08:40.804082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:47.202 [2024-06-10 12:08:40.815166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.202 [2024-06-10 12:08:40.815185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.202 [2024-06-10 12:08:40.815191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.202 [2024-06-10 12:08:40.826079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.202 [2024-06-10 12:08:40.826097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.202 [2024-06-10 12:08:40.826106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:47.202 [2024-06-10 12:08:40.836235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.202 [2024-06-10 12:08:40.836258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.202 [2024-06-10 12:08:40.836264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:47.202 [2024-06-10 12:08:40.846411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.202 [2024-06-10 12:08:40.846429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.202 [2024-06-10 12:08:40.846435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:47.202 [2024-06-10 12:08:40.856195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.202 [2024-06-10 12:08:40.856213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.202 [2024-06-10 12:08:40.856219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.202 [2024-06-10 12:08:40.867658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.202 [2024-06-10 12:08:40.867676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.202 [2024-06-10 12:08:40.867683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:47.203 [2024-06-10 12:08:40.878325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.203 [2024-06-10 12:08:40.878344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.203 [2024-06-10 12:08:40.878350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:47.203 [2024-06-10 12:08:40.888542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.203 [2024-06-10 12:08:40.888560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.203 [2024-06-10 12:08:40.888567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:47.203 [2024-06-10 12:08:40.900952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.203 [2024-06-10 12:08:40.900970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.203 [2024-06-10 12:08:40.900977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.203 [2024-06-10 12:08:40.913706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.203 [2024-06-10 12:08:40.913724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.203 [2024-06-10 12:08:40.913730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:47.203 [2024-06-10 12:08:40.925912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.203 [2024-06-10 12:08:40.925934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.203 [2024-06-10 12:08:40.925940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:47.203 [2024-06-10 12:08:40.935845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.203 [2024-06-10 12:08:40.935864] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.203 [2024-06-10 12:08:40.935870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:47.203 [2024-06-10 12:08:40.947343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.203 [2024-06-10 12:08:40.947362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.203 [2024-06-10 12:08:40.947369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.203 [2024-06-10 12:08:40.960444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.203 [2024-06-10 12:08:40.960463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.203 [2024-06-10 12:08:40.960469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:47.203 [2024-06-10 12:08:40.971561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.203 [2024-06-10 12:08:40.971581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.203 [2024-06-10 12:08:40.971587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:47.464 [2024-06-10 12:08:40.982488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.464 [2024-06-10 12:08:40.982506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.464 [2024-06-10 12:08:40.982512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:47.464 [2024-06-10 12:08:40.994695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.464 [2024-06-10 12:08:40.994713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.464 [2024-06-10 12:08:40.994719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.464 [2024-06-10 12:08:41.005547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.464 [2024-06-10 12:08:41.005566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.464 [2024-06-10 12:08:41.005572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:47.464 [2024-06-10 12:08:41.015487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 
00:30:47.464 [2024-06-10 12:08:41.015505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.464 [2024-06-10 12:08:41.015515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:47.464 [2024-06-10 12:08:41.027093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.464 [2024-06-10 12:08:41.027111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.464 [2024-06-10 12:08:41.027118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:47.464 [2024-06-10 12:08:41.038151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.464 [2024-06-10 12:08:41.038169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.464 [2024-06-10 12:08:41.038176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.464 [2024-06-10 12:08:41.050858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.464 [2024-06-10 12:08:41.050877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.464 [2024-06-10 12:08:41.050884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:47.464 [2024-06-10 12:08:41.062341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.464 [2024-06-10 12:08:41.062360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.464 [2024-06-10 12:08:41.062366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:47.464 [2024-06-10 12:08:41.073405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.464 [2024-06-10 12:08:41.073424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.464 [2024-06-10 12:08:41.073431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:47.464 [2024-06-10 12:08:41.085846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.464 [2024-06-10 12:08:41.085864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.464 [2024-06-10 12:08:41.085871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.464 [2024-06-10 12:08:41.095020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.464 [2024-06-10 12:08:41.095038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.464 [2024-06-10 12:08:41.095045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:47.464 [2024-06-10 12:08:41.105846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.464 [2024-06-10 12:08:41.105864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.464 [2024-06-10 12:08:41.105870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:47.464 [2024-06-10 12:08:41.117438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.464 [2024-06-10 12:08:41.117459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.464 [2024-06-10 12:08:41.117465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:47.464 [2024-06-10 12:08:41.130442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.464 [2024-06-10 12:08:41.130460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.464 [2024-06-10 12:08:41.130466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.464 [2024-06-10 12:08:41.140825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.464 [2024-06-10 12:08:41.140844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.464 [2024-06-10 12:08:41.140850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:47.464 [2024-06-10 12:08:41.152089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.464 [2024-06-10 12:08:41.152108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.464 [2024-06-10 12:08:41.152114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:47.464 [2024-06-10 12:08:41.162215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.465 [2024-06-10 12:08:41.162233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.465 [2024-06-10 12:08:41.162239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:47.465 [2024-06-10 12:08:41.172069] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.465 [2024-06-10 12:08:41.172087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.465 [2024-06-10 12:08:41.172094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.465 [2024-06-10 12:08:41.182313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.465 [2024-06-10 12:08:41.182331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.465 [2024-06-10 12:08:41.182337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:47.465 [2024-06-10 12:08:41.193336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.465 [2024-06-10 12:08:41.193354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.465 [2024-06-10 12:08:41.193360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:47.465 [2024-06-10 12:08:41.202462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.465 [2024-06-10 12:08:41.202479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.465 [2024-06-10 12:08:41.202486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:47.465 [2024-06-10 12:08:41.211578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.465 [2024-06-10 12:08:41.211596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.465 [2024-06-10 12:08:41.211603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.465 [2024-06-10 12:08:41.223852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.465 [2024-06-10 12:08:41.223871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.465 [2024-06-10 12:08:41.223877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:47.725 [2024-06-10 12:08:41.236031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.725 [2024-06-10 12:08:41.236050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.726 [2024-06-10 12:08:41.236056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:30:47.726 [2024-06-10 12:08:41.249741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2090d00) 00:30:47.726 [2024-06-10 12:08:41.249759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.726 [2024-06-10 12:08:41.249766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:47.726 00:30:47.726 Latency(us) 00:30:47.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:47.726 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:47.726 nvme0n1 : 2.01 2895.84 361.98 0.00 0.00 5520.03 894.29 14090.24 00:30:47.726 =================================================================================================================== 00:30:47.726 Total : 2895.84 361.98 0.00 0.00 5520.03 894.29 14090.24 00:30:47.726 0 00:30:47.726 12:08:41 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:47.726 12:08:41 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:47.726 12:08:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:47.726 12:08:41 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:47.726 | .driver_specific 00:30:47.726 | .nvme_error 00:30:47.726 | .status_code 00:30:47.726 | .command_transient_transport_error' 00:30:47.726 12:08:41 -- host/digest.sh@71 -- # (( 187 > 0 )) 00:30:47.726 12:08:41 -- host/digest.sh@73 -- # killprocess 2141884 00:30:47.726 12:08:41 -- common/autotest_common.sh@926 -- # '[' -z 2141884 ']' 00:30:47.726 12:08:41 -- common/autotest_common.sh@930 -- # kill -0 2141884 00:30:47.726 12:08:41 -- common/autotest_common.sh@931 -- # uname 00:30:47.726 12:08:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:47.726 12:08:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2141884 00:30:47.726 12:08:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:47.726 12:08:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:47.726 12:08:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2141884' 00:30:47.726 killing process with pid 2141884 00:30:47.726 12:08:41 -- common/autotest_common.sh@945 -- # kill 2141884 00:30:47.726 Received shutdown signal, test time was about 2.000000 seconds 00:30:47.726 00:30:47.726 Latency(us) 00:30:47.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:47.726 =================================================================================================================== 00:30:47.726 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:47.726 12:08:41 -- common/autotest_common.sh@950 -- # wait 2141884 00:30:47.986 12:08:41 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:30:47.986 12:08:41 -- host/digest.sh@54 -- # local rw bs qd 00:30:47.986 12:08:41 -- host/digest.sh@56 -- # rw=randwrite 00:30:47.986 12:08:41 -- host/digest.sh@56 -- # bs=4096 00:30:47.986 12:08:41 -- host/digest.sh@56 -- # qd=128 00:30:47.986 12:08:41 -- host/digest.sh@58 -- # bperfpid=2142578 00:30:47.986 12:08:41 -- host/digest.sh@60 -- # waitforlisten 2142578 /var/tmp/bperf.sock 00:30:47.986 12:08:41 -- common/autotest_common.sh@819 -- # '[' -z 2142578 ']' 00:30:47.986 12:08:41 -- host/digest.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:30:47.986 12:08:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:47.986 12:08:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:47.986 12:08:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:47.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:47.986 12:08:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:47.986 12:08:41 -- common/autotest_common.sh@10 -- # set +x 00:30:47.986 [2024-06-10 12:08:41.648441] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:47.986 [2024-06-10 12:08:41.648497] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2142578 ] 00:30:47.986 EAL: No free 2048 kB hugepages reported on node 1 00:30:47.986 [2024-06-10 12:08:41.724416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.247 [2024-06-10 12:08:41.776365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:48.818 12:08:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:48.818 12:08:42 -- common/autotest_common.sh@852 -- # return 0 00:30:48.818 12:08:42 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:48.818 12:08:42 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:48.818 12:08:42 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:48.818 12:08:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:48.818 12:08:42 -- common/autotest_common.sh@10 -- # set +x 00:30:48.818 12:08:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:48.818 12:08:42 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:48.818 12:08:42 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:49.078 nvme0n1 00:30:49.078 12:08:42 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:49.078 12:08:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.079 12:08:42 -- common/autotest_common.sh@10 -- # set +x 00:30:49.079 12:08:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.079 12:08:42 -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:49.079 12:08:42 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:49.340 Running I/O for 2 seconds... 
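For readability, the randwrite error-injection pass that the host/digest.sh trace above sets up can be summarized with the condensed shell sketch below. The binaries, RPC socket, target address, and RPC parameters are exactly the ones printed in the trace; the surrounding shell structure (variable names, the background launch, the error-count check) is only an approximation of what the script's bperf_rpc/bperf_py/get_transient_errcount wrappers do, not the script itself.

# Condensed sketch of the flow traced above (assumes the same workspace layout and that
# the NVMe-oF TCP target from earlier in the log is still listening on 10.0.0.2:4420).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# 1. Launch bdevperf in "wait for RPCs" mode (-z) as the digest-checking initiator.
$SPDK/build/examples/bdevperf -m 2 -r $BPERF_SOCK -w randwrite -o 4096 -t 2 -q 128 -z &

# 2. Enable per-error statistics and unlimited retries in the bdev_nvme layer, so injected
#    digest failures are retried instead of failing the job outright.
$SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# 3. Attach the controller with the TCP data digest enabled (--ddgst); this creates nvme0n1.
$SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 4. Ask the accel layer to corrupt the next 256 crc32c operations (issued through the
#    framework's rpc_cmd helper in the trace, i.e. against its default RPC socket rather
#    than the bperf socket), then run the workload; the corrupted digests surface as the
#    COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions logged below.
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests

# 5. Pass/fail check, as used for the randread case above: the number of transient
#    transport errors recorded for nvme0n1 must be greater than zero.
errcount=$($SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 ))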
00:30:49.340 [2024-06-10 12:08:42.910604] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f8e88 00:30:49.340 [2024-06-10 12:08:42.911374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.340 [2024-06-10 12:08:42.911401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:49.340 [2024-06-10 12:08:42.922126] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f46d0 00:30:49.340 [2024-06-10 12:08:42.922899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.340 [2024-06-10 12:08:42.922917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:49.340 [2024-06-10 12:08:42.933583] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190fa7d8 00:30:49.340 [2024-06-10 12:08:42.934355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.340 [2024-06-10 12:08:42.934372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:49.340 [2024-06-10 12:08:42.945003] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f2948 00:30:49.340 [2024-06-10 12:08:42.945784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.340 [2024-06-10 12:08:42.945801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:49.340 [2024-06-10 12:08:42.956456] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190efae0 00:30:49.341 [2024-06-10 12:08:42.957236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.341 [2024-06-10 12:08:42.957257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:49.341 [2024-06-10 12:08:42.969878] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190ecc78 00:30:49.341 [2024-06-10 12:08:42.970613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.341 [2024-06-10 12:08:42.970630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.341 [2024-06-10 12:08:42.981286] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e9168 00:30:49.341 [2024-06-10 12:08:42.981975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.341 [2024-06-10 12:08:42.981991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f 
p:0 m:0 dnr:0 00:30:49.341 [2024-06-10 12:08:42.992697] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e23b8 00:30:49.341 [2024-06-10 12:08:42.993399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.341 [2024-06-10 12:08:42.993416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:49.341 [2024-06-10 12:08:43.004131] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190eaab8 00:30:49.341 [2024-06-10 12:08:43.004846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.341 [2024-06-10 12:08:43.004862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:49.341 [2024-06-10 12:08:43.015563] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e8d30 00:30:49.341 [2024-06-10 12:08:43.016285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.341 [2024-06-10 12:08:43.016301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:49.341 [2024-06-10 12:08:43.026997] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e1f80 00:30:49.341 [2024-06-10 12:08:43.027716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.341 [2024-06-10 12:08:43.027732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:49.341 [2024-06-10 12:08:43.038379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e8088 00:30:49.341 [2024-06-10 12:08:43.039057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.341 [2024-06-10 12:08:43.039073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:49.341 [2024-06-10 12:08:43.049764] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e8d30 00:30:49.341 [2024-06-10 12:08:43.050335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.341 [2024-06-10 12:08:43.050351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:49.341 [2024-06-10 12:08:43.061188] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190efae0 00:30:49.341 [2024-06-10 12:08:43.061897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.341 [2024-06-10 12:08:43.061913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 
cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:49.341 [2024-06-10 12:08:43.072549] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190ed0b0 00:30:49.341 [2024-06-10 12:08:43.073252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.341 [2024-06-10 12:08:43.073268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:49.341 [2024-06-10 12:08:43.083966] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e8d30 00:30:49.341 [2024-06-10 12:08:43.084661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.341 [2024-06-10 12:08:43.084677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:49.341 [2024-06-10 12:08:43.095403] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f1ca0 00:30:49.341 [2024-06-10 12:08:43.096098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.341 [2024-06-10 12:08:43.096114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:49.341 [2024-06-10 12:08:43.106763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e27f0 00:30:49.341 [2024-06-10 12:08:43.107464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.341 [2024-06-10 12:08:43.107480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:49.602 [2024-06-10 12:08:43.118111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e6b70 00:30:49.602 [2024-06-10 12:08:43.118791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.602 [2024-06-10 12:08:43.118810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:49.602 [2024-06-10 12:08:43.129467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e9168 00:30:49.602 [2024-06-10 12:08:43.130139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.602 [2024-06-10 12:08:43.130155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:49.602 [2024-06-10 12:08:43.140826] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190eea00 00:30:49.602 [2024-06-10 12:08:43.141386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.602 [2024-06-10 12:08:43.141403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:100 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:49.602 [2024-06-10 12:08:43.152180] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190efae0 00:30:49.602 [2024-06-10 12:08:43.152675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.602 [2024-06-10 12:08:43.152691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:49.602 [2024-06-10 12:08:43.163628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190ebfd0 00:30:49.602 [2024-06-10 12:08:43.164262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.602 [2024-06-10 12:08:43.164278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:49.602 [2024-06-10 12:08:43.174999] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190ee190 00:30:49.602 [2024-06-10 12:08:43.175609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.602 [2024-06-10 12:08:43.175625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:49.602 [2024-06-10 12:08:43.186419] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190eaab8 00:30:49.602 [2024-06-10 12:08:43.187021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.602 [2024-06-10 12:08:43.187037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:49.602 [2024-06-10 12:08:43.197775] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f9f68 00:30:49.602 [2024-06-10 12:08:43.198229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.602 [2024-06-10 12:08:43.198249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:49.602 [2024-06-10 12:08:43.209141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f8a50 00:30:49.602 [2024-06-10 12:08:43.209709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.602 [2024-06-10 12:08:43.209725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:49.602 [2024-06-10 12:08:43.222401] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f4b08 00:30:49.602 [2024-06-10 12:08:43.224012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.602 [2024-06-10 12:08:43.224029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:49.602 [2024-06-10 12:08:43.233754] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e5220 00:30:49.602 [2024-06-10 12:08:43.235377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.603 [2024-06-10 12:08:43.235393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:49.603 [2024-06-10 12:08:43.245106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f6890 00:30:49.603 [2024-06-10 12:08:43.246709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.603 [2024-06-10 12:08:43.246725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:49.603 [2024-06-10 12:08:43.256450] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f57b0 00:30:49.603 [2024-06-10 12:08:43.258077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.603 [2024-06-10 12:08:43.258092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:49.603 [2024-06-10 12:08:43.267779] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f4f40 00:30:49.603 [2024-06-10 12:08:43.269422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.603 [2024-06-10 12:08:43.269438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:49.603 [2024-06-10 12:08:43.279152] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e1f80 00:30:49.603 [2024-06-10 12:08:43.280780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.603 [2024-06-10 12:08:43.280796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.603 [2024-06-10 12:08:43.290492] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e3d08 00:30:49.603 [2024-06-10 12:08:43.292138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.603 [2024-06-10 12:08:43.292154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:49.603 [2024-06-10 12:08:43.299386] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e6b70 00:30:49.603 [2024-06-10 12:08:43.299556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.603 [2024-06-10 12:08:43.299572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:49.603 [2024-06-10 12:08:43.310789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e7c50 00:30:49.603 [2024-06-10 12:08:43.311035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.603 [2024-06-10 12:08:43.311051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:49.603 [2024-06-10 12:08:43.322146] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190ebb98 00:30:49.603 [2024-06-10 12:08:43.322303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.603 [2024-06-10 12:08:43.322319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:49.603 [2024-06-10 12:08:43.333525] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190ecc78 00:30:49.603 [2024-06-10 12:08:43.333805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.603 [2024-06-10 12:08:43.333822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:49.603 [2024-06-10 12:08:43.344904] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f96f8 00:30:49.603 [2024-06-10 12:08:43.345024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.603 [2024-06-10 12:08:43.345039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:49.603 [2024-06-10 12:08:43.356290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190ee190 00:30:49.603 [2024-06-10 12:08:43.356492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.603 [2024-06-10 12:08:43.356507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:49.603 [2024-06-10 12:08:43.367699] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190eff18 00:30:49.603 [2024-06-10 12:08:43.368005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.603 [2024-06-10 12:08:43.368022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:49.864 [2024-06-10 12:08:43.379246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f35f0 00:30:49.864 [2024-06-10 12:08:43.379508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.864 [2024-06-10 12:08:43.379524] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:49.864 [2024-06-10 12:08:43.390636] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e84c0 00:30:49.864 [2024-06-10 12:08:43.390874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.864 [2024-06-10 12:08:43.390889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:49.864 [2024-06-10 12:08:43.402038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f0788 00:30:49.864 [2024-06-10 12:08:43.402287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.864 [2024-06-10 12:08:43.402302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:49.864 [2024-06-10 12:08:43.413433] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190fbcf0 00:30:49.864 [2024-06-10 12:08:43.413674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.864 [2024-06-10 12:08:43.413692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:49.864 [2024-06-10 12:08:43.427088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190eaef0 00:30:49.864 [2024-06-10 12:08:43.428714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.864 [2024-06-10 12:08:43.428731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:49.864 [2024-06-10 12:08:43.438488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190fe2e8 00:30:49.864 [2024-06-10 12:08:43.440038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.864 [2024-06-10 12:08:43.440054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:49.864 [2024-06-10 12:08:43.449870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e84c0 00:30:49.864 [2024-06-10 12:08:43.451486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.864 [2024-06-10 12:08:43.451502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:49.864 [2024-06-10 12:08:43.459639] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e73e0 00:30:49.864 [2024-06-10 12:08:43.460231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.864 [2024-06-10 
12:08:43.460250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:49.864 [2024-06-10 12:08:43.470892] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e49b0 00:30:49.864 [2024-06-10 12:08:43.471887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.864 [2024-06-10 12:08:43.471903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.864 [2024-06-10 12:08:43.482230] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190ed4e8 00:30:49.864 [2024-06-10 12:08:43.483231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.864 [2024-06-10 12:08:43.483250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:49.864 [2024-06-10 12:08:43.493553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e8088 00:30:49.864 [2024-06-10 12:08:43.494560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.864 [2024-06-10 12:08:43.494576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:49.864 [2024-06-10 12:08:43.504889] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e8088 00:30:49.864 [2024-06-10 12:08:43.505895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.865 [2024-06-10 12:08:43.505911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:49.865 [2024-06-10 12:08:43.516234] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190ed4e8 00:30:49.865 [2024-06-10 12:08:43.517260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.865 [2024-06-10 12:08:43.517276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:49.865 [2024-06-10 12:08:43.527597] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190fa3a0 00:30:49.865 [2024-06-10 12:08:43.528616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.865 [2024-06-10 12:08:43.528632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:49.865 [2024-06-10 12:08:43.538942] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190fc998 00:30:49.865 [2024-06-10 12:08:43.539966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:49.865 [2024-06-10 12:08:43.539982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:49.865 [2024-06-10 12:08:43.550368] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e5658 00:30:49.865 [2024-06-10 12:08:43.551264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.865 [2024-06-10 12:08:43.551280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:49.865 [2024-06-10 12:08:43.561787] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f96f8 00:30:49.865 [2024-06-10 12:08:43.562666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.865 [2024-06-10 12:08:43.562681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:49.865 [2024-06-10 12:08:43.573210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190ed920 00:30:49.865 [2024-06-10 12:08:43.574166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.865 [2024-06-10 12:08:43.574182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:49.865 [2024-06-10 12:08:43.584042] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f8e88 00:30:49.865 [2024-06-10 12:08:43.584441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.865 [2024-06-10 12:08:43.584456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:49.865 [2024-06-10 12:08:43.595584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f57b0 00:30:49.865 [2024-06-10 12:08:43.596327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.865 [2024-06-10 12:08:43.596342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:49.865 [2024-06-10 12:08:43.607009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f7100 00:30:49.865 [2024-06-10 12:08:43.607775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.865 [2024-06-10 12:08:43.607791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:49.865 [2024-06-10 12:08:43.618370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f4b08 00:30:49.865 [2024-06-10 12:08:43.619147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8492 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:49.865 [2024-06-10 12:08:43.619164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:49.865 [2024-06-10 12:08:43.629734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f9b30 00:30:49.865 [2024-06-10 12:08:43.630518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.865 [2024-06-10 12:08:43.630534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:50.126 [2024-06-10 12:08:43.641093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e8d30 00:30:50.126 [2024-06-10 12:08:43.641879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.126 [2024-06-10 12:08:43.641895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:50.126 [2024-06-10 12:08:43.652480] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f1868 00:30:50.126 [2024-06-10 12:08:43.653200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.126 [2024-06-10 12:08:43.653216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:50.126 [2024-06-10 12:08:43.665661] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190eb760 00:30:50.126 [2024-06-10 12:08:43.666333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.126 [2024-06-10 12:08:43.666350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.126 [2024-06-10 12:08:43.677066] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e23b8 00:30:50.126 [2024-06-10 12:08:43.677583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.126 [2024-06-10 12:08:43.677600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.126 [2024-06-10 12:08:43.688533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f5378 00:30:50.127 [2024-06-10 12:08:43.689178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.127 [2024-06-10 12:08:43.689194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:50.127 [2024-06-10 12:08:43.699935] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190eaab8 00:30:50.127 [2024-06-10 12:08:43.700469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17595 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.127 [2024-06-10 12:08:43.700485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:50.127 [2024-06-10 12:08:43.711387] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e5220 00:30:50.127 [2024-06-10 12:08:43.712092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.127 [2024-06-10 12:08:43.712113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:50.127 [2024-06-10 12:08:43.722773] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e4de8 00:30:50.127 [2024-06-10 12:08:43.723446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.127 [2024-06-10 12:08:43.723462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:50.127 [2024-06-10 12:08:43.734120] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f6890 00:30:50.127 [2024-06-10 12:08:43.734769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.127 [2024-06-10 12:08:43.734785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:50.127 [2024-06-10 12:08:43.745482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f5378 00:30:50.127 [2024-06-10 12:08:43.746149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.127 [2024-06-10 12:08:43.746165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:50.127 [2024-06-10 12:08:43.756837] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190eaab8 00:30:50.127 [2024-06-10 12:08:43.757516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.127 [2024-06-10 12:08:43.757532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:50.127 [2024-06-10 12:08:43.768196] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e5ec8 00:30:50.127 [2024-06-10 12:08:43.768886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.127 [2024-06-10 12:08:43.768902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:50.127 [2024-06-10 12:08:43.779598] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190eb760 00:30:50.127 [2024-06-10 12:08:43.780278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 
nsid:1 lba:22832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.127 [2024-06-10 12:08:43.780294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:50.127 [2024-06-10 12:08:43.791014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e23b8 00:30:50.127 [2024-06-10 12:08:43.791690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.127 [2024-06-10 12:08:43.791706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:50.127 [2024-06-10 12:08:43.802397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190ed4e8 00:30:50.127 [2024-06-10 12:08:43.803068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.127 [2024-06-10 12:08:43.803083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:50.127 [2024-06-10 12:08:43.813785] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e5220 00:30:50.127 [2024-06-10 12:08:43.814441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.127 [2024-06-10 12:08:43.814456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:50.127 [2024-06-10 12:08:43.825165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190eaab8 00:30:50.127 [2024-06-10 12:08:43.825824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.127 [2024-06-10 12:08:43.825841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:50.127 [2024-06-10 12:08:43.836539] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190ff3c8 00:30:50.127 [2024-06-10 12:08:43.837193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.127 [2024-06-10 12:08:43.837208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:50.127 [2024-06-10 12:08:43.847915] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f5378 00:30:50.127 [2024-06-10 12:08:43.848564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.127 [2024-06-10 12:08:43.848580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:50.127 [2024-06-10 12:08:43.859256] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f2948 00:30:50.127 [2024-06-10 12:08:43.859881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:35 nsid:1 lba:943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.127 [2024-06-10 12:08:43.859898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:50.127 [2024-06-10 12:08:43.870637] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e4de8 00:30:50.127 [2024-06-10 12:08:43.871123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.127 [2024-06-10 12:08:43.871139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:50.127 [2024-06-10 12:08:43.882285] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e38d0 00:30:50.127 [2024-06-10 12:08:43.882907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.127 [2024-06-10 12:08:43.882922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:50.127 [2024-06-10 12:08:43.893713] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f4b08 00:30:50.127 [2024-06-10 12:08:43.894335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.127 [2024-06-10 12:08:43.894351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:50.388 [2024-06-10 12:08:43.905106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e7c50 00:30:50.388 [2024-06-10 12:08:43.905735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.388 [2024-06-10 12:08:43.905751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:50.388 [2024-06-10 12:08:43.916450] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f5be8 00:30:50.388 [2024-06-10 12:08:43.916941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.388 [2024-06-10 12:08:43.916956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:50.388 [2024-06-10 12:08:43.928006] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f1ca0 00:30:50.388 [2024-06-10 12:08:43.928754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.388 [2024-06-10 12:08:43.928769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:50.388 [2024-06-10 12:08:43.939423] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190fb8b8 00:30:50.388 [2024-06-10 12:08:43.940151] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.388 [2024-06-10 12:08:43.940167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:50.388 [2024-06-10 12:08:43.950828] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e23b8 00:30:50.388 [2024-06-10 12:08:43.951554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.388 [2024-06-10 12:08:43.951569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:50.388 [2024-06-10 12:08:43.962249] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f1ca0 00:30:50.389 [2024-06-10 12:08:43.962983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.389 [2024-06-10 12:08:43.962999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:50.389 [2024-06-10 12:08:43.973652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e73e0 00:30:50.389 [2024-06-10 12:08:43.974357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.389 [2024-06-10 12:08:43.974373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:50.389 [2024-06-10 12:08:43.985058] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190fb480 00:30:50.389 [2024-06-10 12:08:43.985721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.389 [2024-06-10 12:08:43.985738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:50.389 [2024-06-10 12:08:43.996539] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190edd58 00:30:50.389 [2024-06-10 12:08:43.997228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.389 [2024-06-10 12:08:43.997248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:50.389 [2024-06-10 12:08:44.007949] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f6890 00:30:50.389 [2024-06-10 12:08:44.008518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.389 [2024-06-10 12:08:44.008537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:50.389 [2024-06-10 12:08:44.018581] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e8d30 00:30:50.389 [2024-06-10 12:08:44.019184] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.389 [2024-06-10 12:08:44.019199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:50.389 [2024-06-10 12:08:44.030250] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f7da8 00:30:50.389 [2024-06-10 12:08:44.030730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.389 [2024-06-10 12:08:44.030746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:50.389 [2024-06-10 12:08:44.041642] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190fbcf0 00:30:50.389 [2024-06-10 12:08:44.042042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.389 [2024-06-10 12:08:44.042058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:50.389 [2024-06-10 12:08:44.053060] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190ea248 00:30:50.389 [2024-06-10 12:08:44.053483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.389 [2024-06-10 12:08:44.053498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:50.389 [2024-06-10 12:08:44.064522] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f4298 00:30:50.389 [2024-06-10 12:08:44.064902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.389 [2024-06-10 12:08:44.064918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:50.389 [2024-06-10 12:08:44.075957] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190fcdd0 00:30:50.389 [2024-06-10 12:08:44.076377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.389 [2024-06-10 12:08:44.076393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:50.389 [2024-06-10 12:08:44.087396] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e5658 00:30:50.389 [2024-06-10 12:08:44.087844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.389 [2024-06-10 12:08:44.087860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:50.389 [2024-06-10 12:08:44.098785] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f92c0 00:30:50.389 [2024-06-10 
12:08:44.099211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.389 [2024-06-10 12:08:44.099227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:50.389 [2024-06-10 12:08:44.110134] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190eb328 00:30:50.389 [2024-06-10 12:08:44.110578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.389 [2024-06-10 12:08:44.110596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:50.389 [2024-06-10 12:08:44.121519] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f5be8 00:30:50.389 [2024-06-10 12:08:44.121839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.389 [2024-06-10 12:08:44.121854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.389 [2024-06-10 12:08:44.132928] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f2510 00:30:50.389 [2024-06-10 12:08:44.133373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.389 [2024-06-10 12:08:44.133389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:50.389 [2024-06-10 12:08:44.144356] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f96f8 00:30:50.389 [2024-06-10 12:08:44.144778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.389 [2024-06-10 12:08:44.144793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:50.389 [2024-06-10 12:08:44.155804] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f7538 00:30:50.389 [2024-06-10 12:08:44.156136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.389 [2024-06-10 12:08:44.156152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:50.650 [2024-06-10 12:08:44.167251] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f6cc8 00:30:50.650 [2024-06-10 12:08:44.167679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.650 [2024-06-10 12:08:44.167694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:50.650 [2024-06-10 12:08:44.178661] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190eb760 
00:30:50.651 [2024-06-10 12:08:44.179057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.651 [2024-06-10 12:08:44.179074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:50.651 [2024-06-10 12:08:44.190030] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f7970 00:30:50.651 [2024-06-10 12:08:44.190457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.651 [2024-06-10 12:08:44.190473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:50.651 [2024-06-10 12:08:44.201457] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f6cc8 00:30:50.651 [2024-06-10 12:08:44.201728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.651 [2024-06-10 12:08:44.201744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:50.651 [2024-06-10 12:08:44.212912] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190fc998 00:30:50.651 [2024-06-10 12:08:44.213300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.651 [2024-06-10 12:08:44.213315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:50.651 [2024-06-10 12:08:44.224319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e1b48 00:30:50.651 [2024-06-10 12:08:44.224702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.651 [2024-06-10 12:08:44.224718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:50.651 [2024-06-10 12:08:44.235716] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e73e0 00:30:50.651 [2024-06-10 12:08:44.236091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.651 [2024-06-10 12:08:44.236107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:50.651 [2024-06-10 12:08:44.247116] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190feb58 00:30:50.651 [2024-06-10 12:08:44.247356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.651 [2024-06-10 12:08:44.247370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:50.651 [2024-06-10 12:08:44.258484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with 
pdu=0x2000190e8088 00:30:50.651 [2024-06-10 12:08:44.258696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.651 [2024-06-10 12:08:44.258711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:50.651 [2024-06-10 12:08:44.269973] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e1b48 00:30:50.651 [2024-06-10 12:08:44.270303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.651 [2024-06-10 12:08:44.270319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:50.651 [2024-06-10 12:08:44.281362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f9f68 00:30:50.651 [2024-06-10 12:08:44.281680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.651 [2024-06-10 12:08:44.281696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:50.651 [2024-06-10 12:08:44.292755] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e6738 00:30:50.651 [2024-06-10 12:08:44.292932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.651 [2024-06-10 12:08:44.292947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:50.651 [2024-06-10 12:08:44.304143] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e8088 00:30:50.651 [2024-06-10 12:08:44.304431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.651 [2024-06-10 12:08:44.304447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:50.651 [2024-06-10 12:08:44.315555] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e6738 00:30:50.651 [2024-06-10 12:08:44.315856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.651 [2024-06-10 12:08:44.315872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:50.651 [2024-06-10 12:08:44.326931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f6cc8 00:30:50.651 [2024-06-10 12:08:44.327091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.651 [2024-06-10 12:08:44.327107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:50.651 [2024-06-10 12:08:44.338328] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xffbea0) with pdu=0x2000190eaab8 00:30:50.651 [2024-06-10 12:08:44.338586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.651 [2024-06-10 12:08:44.338602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:50.651 [2024-06-10 12:08:44.349705] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190eaef0 00:30:50.651 [2024-06-10 12:08:44.349965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.651 [2024-06-10 12:08:44.349982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:50.651 [2024-06-10 12:08:44.363335] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e5a90 00:30:50.651 [2024-06-10 12:08:44.364949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.651 [2024-06-10 12:08:44.364966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.651 [2024-06-10 12:08:44.374789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e8088 00:30:50.651 [2024-06-10 12:08:44.376272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.651 [2024-06-10 12:08:44.376288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.651 [2024-06-10 12:08:44.385733] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190eb760 00:30:50.651 [2024-06-10 12:08:44.386783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.651 [2024-06-10 12:08:44.386799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:50.651 [2024-06-10 12:08:44.396029] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e49b0 00:30:50.651 [2024-06-10 12:08:44.396567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.651 [2024-06-10 12:08:44.396583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:50.651 [2024-06-10 12:08:44.407328] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f81e0 00:30:50.651 [2024-06-10 12:08:44.408322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.651 [2024-06-10 12:08:44.408341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:50.651 [2024-06-10 12:08:44.418676] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xffbea0) with pdu=0x2000190e3d08 00:30:50.651 [2024-06-10 12:08:44.419673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.651 [2024-06-10 12:08:44.419689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:50.913 [2024-06-10 12:08:44.430063] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e5220 00:30:50.913 [2024-06-10 12:08:44.431048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.913 [2024-06-10 12:08:44.431064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:50.913 [2024-06-10 12:08:44.441478] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e1f80 00:30:50.913 [2024-06-10 12:08:44.442488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.913 [2024-06-10 12:08:44.442503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:50.913 [2024-06-10 12:08:44.452825] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f6890 00:30:50.913 [2024-06-10 12:08:44.453818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.913 [2024-06-10 12:08:44.453834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:50.913 [2024-06-10 12:08:44.464374] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190fa3a0 00:30:50.913 [2024-06-10 12:08:44.465370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.913 [2024-06-10 12:08:44.465386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:50.913 [2024-06-10 12:08:44.475748] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190fa7d8 00:30:50.913 [2024-06-10 12:08:44.476731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.913 [2024-06-10 12:08:44.476747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:50.913 [2024-06-10 12:08:44.486759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190fb480 00:30:50.913 [2024-06-10 12:08:44.487411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.913 [2024-06-10 12:08:44.487426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:50.913 [2024-06-10 12:08:44.498140] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f57b0 00:30:50.913 [2024-06-10 12:08:44.498808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.913 [2024-06-10 12:08:44.498823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:50.913 [2024-06-10 12:08:44.509530] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190fe2e8 00:30:50.913 [2024-06-10 12:08:44.510193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.913 [2024-06-10 12:08:44.510209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:50.913 [2024-06-10 12:08:44.520926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190feb58 00:30:50.913 [2024-06-10 12:08:44.521597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.913 [2024-06-10 12:08:44.521614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.913 [2024-06-10 12:08:44.532290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190fac10 00:30:50.913 [2024-06-10 12:08:44.532959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.913 [2024-06-10 12:08:44.532975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:50.913 [2024-06-10 12:08:44.545451] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f96f8 00:30:50.913 [2024-06-10 12:08:44.546099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.913 [2024-06-10 12:08:44.546115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.913 [2024-06-10 12:08:44.556838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190ec840 00:30:50.913 [2024-06-10 12:08:44.557459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.913 [2024-06-10 12:08:44.557476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.913 [2024-06-10 12:08:44.568231] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f7538 00:30:50.913 [2024-06-10 12:08:44.568830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.913 [2024-06-10 12:08:44.568846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:50.913 [2024-06-10 
12:08:44.579663] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e5a90 00:30:50.913 [2024-06-10 12:08:44.580230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.913 [2024-06-10 12:08:44.580252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:50.913 [2024-06-10 12:08:44.591086] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f7970 00:30:50.913 [2024-06-10 12:08:44.591664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.913 [2024-06-10 12:08:44.591681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:50.913 [2024-06-10 12:08:44.602526] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190ea248 00:30:50.913 [2024-06-10 12:08:44.603094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.913 [2024-06-10 12:08:44.603110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:50.913 [2024-06-10 12:08:44.613930] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f0bc0 00:30:50.913 [2024-06-10 12:08:44.614554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.913 [2024-06-10 12:08:44.614570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:50.913 [2024-06-10 12:08:44.625346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190fac10 00:30:50.913 [2024-06-10 12:08:44.625810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.913 [2024-06-10 12:08:44.625826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:50.913 [2024-06-10 12:08:44.636740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f46d0 00:30:50.913 [2024-06-10 12:08:44.637344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.913 [2024-06-10 12:08:44.637360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:50.913 [2024-06-10 12:08:44.648145] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e73e0 00:30:50.913 [2024-06-10 12:08:44.648759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.913 [2024-06-10 12:08:44.648776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:50.913 
[2024-06-10 12:08:44.659585] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f31b8 00:30:50.913 [2024-06-10 12:08:44.660176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:25311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.914 [2024-06-10 12:08:44.660193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:50.914 [2024-06-10 12:08:44.670992] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e5220 00:30:50.914 [2024-06-10 12:08:44.671602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.914 [2024-06-10 12:08:44.671619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:50.914 [2024-06-10 12:08:44.682438] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f3e60 00:30:50.914 [2024-06-10 12:08:44.683037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.914 [2024-06-10 12:08:44.683053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:51.176 [2024-06-10 12:08:44.693876] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f35f0 00:30:51.176 [2024-06-10 12:08:44.694468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.176 [2024-06-10 12:08:44.694485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:51.176 [2024-06-10 12:08:44.705299] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190feb58 00:30:51.176 [2024-06-10 12:08:44.705878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.176 [2024-06-10 12:08:44.705897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:51.176 [2024-06-10 12:08:44.716690] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f9b30 00:30:51.176 [2024-06-10 12:08:44.717293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.176 [2024-06-10 12:08:44.717309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:51.176 [2024-06-10 12:08:44.728058] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e5220 00:30:51.176 [2024-06-10 12:08:44.728644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.176 [2024-06-10 12:08:44.728660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 
00:30:51.176 [2024-06-10 12:08:44.739463] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f3e60 00:30:51.177 [2024-06-10 12:08:44.740030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.177 [2024-06-10 12:08:44.740045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:51.177 [2024-06-10 12:08:44.750872] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f46d0 00:30:51.177 [2024-06-10 12:08:44.751474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.177 [2024-06-10 12:08:44.751490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:51.177 [2024-06-10 12:08:44.762298] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e73e0 00:30:51.177 [2024-06-10 12:08:44.762903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.177 [2024-06-10 12:08:44.762919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:51.177 [2024-06-10 12:08:44.773736] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e5a90 00:30:51.177 [2024-06-10 12:08:44.774326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.177 [2024-06-10 12:08:44.774342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:51.177 [2024-06-10 12:08:44.785131] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190feb58 00:30:51.177 [2024-06-10 12:08:44.785594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.177 [2024-06-10 12:08:44.785610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:51.177 [2024-06-10 12:08:44.796534] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190fd640 00:30:51.177 [2024-06-10 12:08:44.797098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.177 [2024-06-10 12:08:44.797114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:51.177 [2024-06-10 12:08:44.807918] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e88f8 00:30:51.177 [2024-06-10 12:08:44.808342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.177 [2024-06-10 12:08:44.808358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 
sqhd:006a p:0 m:0 dnr:0 00:30:51.177 [2024-06-10 12:08:44.819303] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190ff3c8 00:30:51.177 [2024-06-10 12:08:44.819805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.177 [2024-06-10 12:08:44.819820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:51.177 [2024-06-10 12:08:44.830746] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e99d8 00:30:51.177 [2024-06-10 12:08:44.831284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.177 [2024-06-10 12:08:44.831300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:51.177 [2024-06-10 12:08:44.842090] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190f5378 00:30:51.177 [2024-06-10 12:08:44.842626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.177 [2024-06-10 12:08:44.842642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:51.177 [2024-06-10 12:08:44.853470] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190fe720 00:30:51.177 [2024-06-10 12:08:44.853989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.177 [2024-06-10 12:08:44.854004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:51.177 [2024-06-10 12:08:44.864877] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190fb480 00:30:51.177 [2024-06-10 12:08:44.865376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.177 [2024-06-10 12:08:44.865392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:51.177 [2024-06-10 12:08:44.876277] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190edd58 00:30:51.177 [2024-06-10 12:08:44.876680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.177 [2024-06-10 12:08:44.876696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:51.177 [2024-06-10 12:08:44.887808] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e6738 00:30:51.177 [2024-06-10 12:08:44.888303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.177 [2024-06-10 12:08:44.888319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:51.177 [2024-06-10 12:08:44.899194] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffbea0) with pdu=0x2000190e88f8 00:30:51.177 [2024-06-10 12:08:44.899676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.177 [2024-06-10 12:08:44.899692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:51.177 00:30:51.177 Latency(us) 00:30:51.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:51.177 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:51.177 nvme0n1 : 2.00 22337.11 87.25 0.00 0.00 5726.30 2744.32 15073.28 00:30:51.177 =================================================================================================================== 00:30:51.177 Total : 22337.11 87.25 0.00 0.00 5726.30 2744.32 15073.28 00:30:51.177 0 00:30:51.177 12:08:44 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:51.177 12:08:44 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:51.177 12:08:44 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:51.177 | .driver_specific 00:30:51.177 | .nvme_error 00:30:51.177 | .status_code 00:30:51.177 | .command_transient_transport_error' 00:30:51.177 12:08:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:51.438 12:08:45 -- host/digest.sh@71 -- # (( 175 > 0 )) 00:30:51.438 12:08:45 -- host/digest.sh@73 -- # killprocess 2142578 00:30:51.438 12:08:45 -- common/autotest_common.sh@926 -- # '[' -z 2142578 ']' 00:30:51.438 12:08:45 -- common/autotest_common.sh@930 -- # kill -0 2142578 00:30:51.438 12:08:45 -- common/autotest_common.sh@931 -- # uname 00:30:51.438 12:08:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:51.438 12:08:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2142578 00:30:51.438 12:08:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:51.438 12:08:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:51.438 12:08:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2142578' 00:30:51.438 killing process with pid 2142578 00:30:51.438 12:08:45 -- common/autotest_common.sh@945 -- # kill 2142578 00:30:51.438 Received shutdown signal, test time was about 2.000000 seconds 00:30:51.438 00:30:51.438 Latency(us) 00:30:51.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:51.438 =================================================================================================================== 00:30:51.438 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:51.438 12:08:45 -- common/autotest_common.sh@950 -- # wait 2142578 00:30:51.699 12:08:45 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:30:51.699 12:08:45 -- host/digest.sh@54 -- # local rw bs qd 00:30:51.699 12:08:45 -- host/digest.sh@56 -- # rw=randwrite 00:30:51.699 12:08:45 -- host/digest.sh@56 -- # bs=131072 00:30:51.699 12:08:45 -- host/digest.sh@56 -- # qd=16 00:30:51.699 12:08:45 -- host/digest.sh@58 -- # bperfpid=2143269 00:30:51.699 12:08:45 -- host/digest.sh@60 -- # waitforlisten 2143269 /var/tmp/bperf.sock 00:30:51.699 12:08:45 -- common/autotest_common.sh@819 -- # '[' -z 2143269 ']' 00:30:51.699 12:08:45 -- host/digest.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:30:51.699 12:08:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:51.699 12:08:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:51.699 12:08:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:51.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:51.699 12:08:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:51.699 12:08:45 -- common/autotest_common.sh@10 -- # set +x 00:30:51.699 [2024-06-10 12:08:45.297265] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:51.699 [2024-06-10 12:08:45.297320] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2143269 ] 00:30:51.699 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:51.699 Zero copy mechanism will not be used. 00:30:51.699 EAL: No free 2048 kB hugepages reported on node 1 00:30:51.699 [2024-06-10 12:08:45.374090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.699 [2024-06-10 12:08:45.424638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:52.324 12:08:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:52.324 12:08:46 -- common/autotest_common.sh@852 -- # return 0 00:30:52.324 12:08:46 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:52.324 12:08:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:52.585 12:08:46 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:52.585 12:08:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:52.585 12:08:46 -- common/autotest_common.sh@10 -- # set +x 00:30:52.585 12:08:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:52.585 12:08:46 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:52.585 12:08:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:52.846 nvme0n1 00:30:52.846 12:08:46 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:52.846 12:08:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:52.846 12:08:46 -- common/autotest_common.sh@10 -- # set +x 00:30:52.846 12:08:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:52.846 12:08:46 -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:52.846 12:08:46 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:52.846 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:52.846 Zero copy mechanism will not be used. 00:30:52.846 Running I/O for 2 seconds... 
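The xtrace entries above walk through the second digest sub-test end to end: bdevperf is started in wait mode for a 128 KiB randwrite workload at queue depth 16, data digest is enabled on the attached controller, crc32c corruption is injected on the accel layer, the workload is run, and the COMMAND TRANSIENT TRANSPORT ERROR count is read back from the bdev iostat. The condensed sketch below is assembled only from commands visible in the trace (binaries, socket, RPC names, flags and the jq filter are copied verbatim); the socket used by the target-side rpc_cmd calls is not shown in the log, so rpc.py is left with its default socket there.

# Sketch of the traced flow, under the assumptions stated above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF=/var/tmp/bperf.sock

# Start bdevperf in wait mode (-z): randwrite, 128 KiB I/O, queue depth 16, 2 s.
$SPDK/build/examples/bdevperf -m 2 -r $BPERF -w randwrite -o 131072 -t 2 -q 16 -z &

# Initiator-side setup over the bperf socket: error accounting plus a TCP
# controller attached with data digest (--ddgst) enabled.
$SPDK/scripts/rpc.py -s $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable   # rpc_cmd in the trace (target side, default socket assumed)
$SPDK/scripts/rpc.py -s $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt every 32nd crc32c operation on the target, then run the workload.
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32   # rpc_cmd in the trace
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF perform_tests

# Count the transient transport errors seen by the initiator; the test passes
# only if the injected digest errors actually surfaced (175 in the 4 KiB pass above).
errs=$($SPDK/scripts/rpc.py -s $BPERF bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errs > 0 ))

Each "Data digest error ... COMMAND TRANSIENT TRANSPORT ERROR (00/22)" triplet that follows is one injected corruption being detected by the TCP data-digest check and reported back through the qpair, which is what the count above tallies.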
00:30:53.108 [2024-06-10 12:08:46.629758] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.108 [2024-06-10 12:08:46.630023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.108 [2024-06-10 12:08:46.630052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.108 [2024-06-10 12:08:46.639978] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.108 [2024-06-10 12:08:46.640239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.108 [2024-06-10 12:08:46.640266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.108 [2024-06-10 12:08:46.646665] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.108 [2024-06-10 12:08:46.646743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.108 [2024-06-10 12:08:46.646759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.108 [2024-06-10 12:08:46.654818] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.108 [2024-06-10 12:08:46.655046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.109 [2024-06-10 12:08:46.655062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.109 [2024-06-10 12:08:46.665124] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.109 [2024-06-10 12:08:46.665191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.109 [2024-06-10 12:08:46.665211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.109 [2024-06-10 12:08:46.675821] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.109 [2024-06-10 12:08:46.676060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.109 [2024-06-10 12:08:46.676077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.109 [2024-06-10 12:08:46.686392] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.109 [2024-06-10 12:08:46.686648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.109 [2024-06-10 12:08:46.686666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.109 [2024-06-10 12:08:46.693458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.109 [2024-06-10 12:08:46.693536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.109 [2024-06-10 12:08:46.693551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.109 [2024-06-10 12:08:46.703467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.109 [2024-06-10 12:08:46.703719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.109 [2024-06-10 12:08:46.703737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.109 [2024-06-10 12:08:46.708182] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.109 [2024-06-10 12:08:46.708264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.109 [2024-06-10 12:08:46.708280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.109 [2024-06-10 12:08:46.713123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.109 [2024-06-10 12:08:46.713184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.109 [2024-06-10 12:08:46.713200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.109 [2024-06-10 12:08:46.721094] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.109 [2024-06-10 12:08:46.721158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.109 [2024-06-10 12:08:46.721174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.109 [2024-06-10 12:08:46.727143] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.109 [2024-06-10 12:08:46.727226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.109 [2024-06-10 12:08:46.727241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.109 [2024-06-10 12:08:46.730575] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.109 [2024-06-10 12:08:46.730657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.109 [2024-06-10 12:08:46.730673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.109 [2024-06-10 12:08:46.734195] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.109 [2024-06-10 12:08:46.734289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.109 [2024-06-10 12:08:46.734305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.109 [2024-06-10 12:08:46.737741] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.109 [2024-06-10 12:08:46.737896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.109 [2024-06-10 12:08:46.737913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.109 [2024-06-10 12:08:46.741240] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.109 [2024-06-10 12:08:46.741399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.109 [2024-06-10 12:08:46.741415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.109 [2024-06-10 12:08:46.744799] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.109 [2024-06-10 12:08:46.744897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.109 [2024-06-10 12:08:46.744913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.109 [2024-06-10 12:08:46.748530] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.109 [2024-06-10 12:08:46.748618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.109 [2024-06-10 12:08:46.748633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.109 [2024-06-10 12:08:46.754775] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.109 [2024-06-10 12:08:46.754859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.109 [2024-06-10 12:08:46.754875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.109 [2024-06-10 12:08:46.758634] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.109 [2024-06-10 12:08:46.758704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.109 [2024-06-10 12:08:46.758719] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.109 [2024-06-10 12:08:46.762268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.109 [2024-06-10 12:08:46.762339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.109 [2024-06-10 12:08:46.762355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.109 [2024-06-10 12:08:46.766573] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.109 [2024-06-10 12:08:46.766760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.109 [2024-06-10 12:08:46.766776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.109 [2024-06-10 12:08:46.770833] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.109 [2024-06-10 12:08:46.770983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.109 [2024-06-10 12:08:46.770999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.109 [2024-06-10 12:08:46.774203] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.109 [2024-06-10 12:08:46.774305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.109 [2024-06-10 12:08:46.774321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.109 [2024-06-10 12:08:46.777718] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.109 [2024-06-10 12:08:46.777775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.109 [2024-06-10 12:08:46.777790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.109 [2024-06-10 12:08:46.781158] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.109 [2024-06-10 12:08:46.781220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.109 [2024-06-10 12:08:46.781236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.109 [2024-06-10 12:08:46.785831] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.109 [2024-06-10 12:08:46.785918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.109 
[2024-06-10 12:08:46.785933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.109 [2024-06-10 12:08:46.790471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.109 [2024-06-10 12:08:46.790548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.109 [2024-06-10 12:08:46.790563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.109 [2024-06-10 12:08:46.794325] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.109 [2024-06-10 12:08:46.794448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.109 [2024-06-10 12:08:46.794463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.109 [2024-06-10 12:08:46.800325] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.110 [2024-06-10 12:08:46.800475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.110 [2024-06-10 12:08:46.800493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.110 [2024-06-10 12:08:46.809162] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.110 [2024-06-10 12:08:46.809539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.110 [2024-06-10 12:08:46.809556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.110 [2024-06-10 12:08:46.815136] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.110 [2024-06-10 12:08:46.815291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.110 [2024-06-10 12:08:46.815306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.110 [2024-06-10 12:08:46.821164] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.110 [2024-06-10 12:08:46.821237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.110 [2024-06-10 12:08:46.821258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.110 [2024-06-10 12:08:46.825449] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.110 [2024-06-10 12:08:46.825542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:53.110 [2024-06-10 12:08:46.825557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.110 [2024-06-10 12:08:46.829326] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.110 [2024-06-10 12:08:46.829396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.110 [2024-06-10 12:08:46.829411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.110 [2024-06-10 12:08:46.833078] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.110 [2024-06-10 12:08:46.833145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.110 [2024-06-10 12:08:46.833160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.110 [2024-06-10 12:08:46.836913] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.110 [2024-06-10 12:08:46.836997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.110 [2024-06-10 12:08:46.837012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.110 [2024-06-10 12:08:46.840496] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.110 [2024-06-10 12:08:46.840574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.110 [2024-06-10 12:08:46.840589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.110 [2024-06-10 12:08:46.843988] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.110 [2024-06-10 12:08:46.844135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.110 [2024-06-10 12:08:46.844151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.110 [2024-06-10 12:08:46.847342] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.110 [2024-06-10 12:08:46.847442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.110 [2024-06-10 12:08:46.847457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.110 [2024-06-10 12:08:46.850639] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.110 [2024-06-10 12:08:46.850711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.110 [2024-06-10 12:08:46.850727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.110 [2024-06-10 12:08:46.854339] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.110 [2024-06-10 12:08:46.854430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.110 [2024-06-10 12:08:46.854445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.110 [2024-06-10 12:08:46.857656] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.110 [2024-06-10 12:08:46.857729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.110 [2024-06-10 12:08:46.857744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.110 [2024-06-10 12:08:46.860988] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.110 [2024-06-10 12:08:46.861061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.110 [2024-06-10 12:08:46.861076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.110 [2024-06-10 12:08:46.864393] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.110 [2024-06-10 12:08:46.864541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.110 [2024-06-10 12:08:46.864556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.110 [2024-06-10 12:08:46.867818] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.110 [2024-06-10 12:08:46.867954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.110 [2024-06-10 12:08:46.867970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.110 [2024-06-10 12:08:46.871268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.110 [2024-06-10 12:08:46.871410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.110 [2024-06-10 12:08:46.871427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.110 [2024-06-10 12:08:46.877529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.110 [2024-06-10 12:08:46.877724] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.110 [2024-06-10 12:08:46.877740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.372 [2024-06-10 12:08:46.881899] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.372 [2024-06-10 12:08:46.882003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.372 [2024-06-10 12:08:46.882019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.372 [2024-06-10 12:08:46.886332] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.372 [2024-06-10 12:08:46.886433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.372 [2024-06-10 12:08:46.886449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.372 [2024-06-10 12:08:46.891553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.372 [2024-06-10 12:08:46.891793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.372 [2024-06-10 12:08:46.891810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.372 [2024-06-10 12:08:46.895613] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.372 [2024-06-10 12:08:46.895683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.372 [2024-06-10 12:08:46.895699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.372 [2024-06-10 12:08:46.899043] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.372 [2024-06-10 12:08:46.899120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.372 [2024-06-10 12:08:46.899135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.372 [2024-06-10 12:08:46.902432] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.372 [2024-06-10 12:08:46.902544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.372 [2024-06-10 12:08:46.902561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.372 [2024-06-10 12:08:46.905939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.372 [2024-06-10 12:08:46.906082] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.372 [2024-06-10 12:08:46.906098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.372 [2024-06-10 12:08:46.909319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.372 [2024-06-10 12:08:46.909423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.373 [2024-06-10 12:08:46.909443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.373 [2024-06-10 12:08:46.912848] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.373 [2024-06-10 12:08:46.912925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.373 [2024-06-10 12:08:46.912940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.373 [2024-06-10 12:08:46.916342] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.373 [2024-06-10 12:08:46.916443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.373 [2024-06-10 12:08:46.916459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.373 [2024-06-10 12:08:46.919764] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.373 [2024-06-10 12:08:46.919844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.373 [2024-06-10 12:08:46.919860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.373 [2024-06-10 12:08:46.923070] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.373 [2024-06-10 12:08:46.923149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.373 [2024-06-10 12:08:46.923164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.373 [2024-06-10 12:08:46.926756] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.373 [2024-06-10 12:08:46.926892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.373 [2024-06-10 12:08:46.926907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.373 [2024-06-10 12:08:46.930173] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.373 
[2024-06-10 12:08:46.930287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.373 [2024-06-10 12:08:46.930303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.373 [2024-06-10 12:08:46.933657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.373 [2024-06-10 12:08:46.933798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.373 [2024-06-10 12:08:46.933814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.373 [2024-06-10 12:08:46.937330] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.373 [2024-06-10 12:08:46.937414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.373 [2024-06-10 12:08:46.937429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.373 [2024-06-10 12:08:46.940902] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.373 [2024-06-10 12:08:46.940993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.373 [2024-06-10 12:08:46.941008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.373 [2024-06-10 12:08:46.945222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.373 [2024-06-10 12:08:46.945307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.373 [2024-06-10 12:08:46.945322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.373 [2024-06-10 12:08:46.950712] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.373 [2024-06-10 12:08:46.950769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.373 [2024-06-10 12:08:46.950784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.373 [2024-06-10 12:08:46.956808] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.373 [2024-06-10 12:08:46.956866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.373 [2024-06-10 12:08:46.956881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.373 [2024-06-10 12:08:46.964850] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) 
with pdu=0x2000190fef90 00:30:53.373 [2024-06-10 12:08:46.964919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.373 [2024-06-10 12:08:46.964935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.373 [2024-06-10 12:08:46.970168] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.373 [2024-06-10 12:08:46.970267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.373 [2024-06-10 12:08:46.970286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.373 [2024-06-10 12:08:46.975936] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.373 [2024-06-10 12:08:46.976070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.373 [2024-06-10 12:08:46.976087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.373 [2024-06-10 12:08:46.980451] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.373 [2024-06-10 12:08:46.980580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.373 [2024-06-10 12:08:46.980596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.373 [2024-06-10 12:08:46.986374] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.373 [2024-06-10 12:08:46.986471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.373 [2024-06-10 12:08:46.986487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.373 [2024-06-10 12:08:46.995195] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.373 [2024-06-10 12:08:46.995280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.373 [2024-06-10 12:08:46.995295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.373 [2024-06-10 12:08:47.000144] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.373 [2024-06-10 12:08:47.000209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.373 [2024-06-10 12:08:47.000224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.373 [2024-06-10 12:08:47.005338] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.373 [2024-06-10 12:08:47.005454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.373 [2024-06-10 12:08:47.005469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.373 [2024-06-10 12:08:47.012970] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.373 [2024-06-10 12:08:47.013027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.373 [2024-06-10 12:08:47.013042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.373 [2024-06-10 12:08:47.018710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.373 [2024-06-10 12:08:47.018833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.373 [2024-06-10 12:08:47.018848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.373 [2024-06-10 12:08:47.024367] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.373 [2024-06-10 12:08:47.024497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.373 [2024-06-10 12:08:47.024513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.373 [2024-06-10 12:08:47.032039] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.373 [2024-06-10 12:08:47.032402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.373 [2024-06-10 12:08:47.032418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.373 [2024-06-10 12:08:47.038172] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.373 [2024-06-10 12:08:47.038254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.373 [2024-06-10 12:08:47.038269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.373 [2024-06-10 12:08:47.043221] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.373 [2024-06-10 12:08:47.043368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.373 [2024-06-10 12:08:47.043386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.373 [2024-06-10 12:08:47.048093] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.373 [2024-06-10 12:08:47.048152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.374 [2024-06-10 12:08:47.048167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.374 [2024-06-10 12:08:47.053034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.374 [2024-06-10 12:08:47.053090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.374 [2024-06-10 12:08:47.053105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.374 [2024-06-10 12:08:47.058731] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.374 [2024-06-10 12:08:47.058829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.374 [2024-06-10 12:08:47.058845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.374 [2024-06-10 12:08:47.065830] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.374 [2024-06-10 12:08:47.065904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.374 [2024-06-10 12:08:47.065919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.374 [2024-06-10 12:08:47.071901] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.374 [2024-06-10 12:08:47.072062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.374 [2024-06-10 12:08:47.072078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.374 [2024-06-10 12:08:47.079653] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.374 [2024-06-10 12:08:47.079781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.374 [2024-06-10 12:08:47.079797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.374 [2024-06-10 12:08:47.084642] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.374 [2024-06-10 12:08:47.084742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.374 [2024-06-10 12:08:47.084757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:30:53.374 [2024-06-10 12:08:47.089161] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.374 [2024-06-10 12:08:47.089219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.374 [2024-06-10 12:08:47.089234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.374 [2024-06-10 12:08:47.093466] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.374 [2024-06-10 12:08:47.093538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.374 [2024-06-10 12:08:47.093552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.374 [2024-06-10 12:08:47.097010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.374 [2024-06-10 12:08:47.097095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.374 [2024-06-10 12:08:47.097110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.374 [2024-06-10 12:08:47.100496] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.374 [2024-06-10 12:08:47.100609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.374 [2024-06-10 12:08:47.100624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.374 [2024-06-10 12:08:47.103943] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.374 [2024-06-10 12:08:47.104016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.374 [2024-06-10 12:08:47.104032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.374 [2024-06-10 12:08:47.107466] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.374 [2024-06-10 12:08:47.107620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.374 [2024-06-10 12:08:47.107636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.374 [2024-06-10 12:08:47.110902] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.374 [2024-06-10 12:08:47.111037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.374 [2024-06-10 12:08:47.111052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.374 [2024-06-10 12:08:47.114384] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.374 [2024-06-10 12:08:47.114511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.374 [2024-06-10 12:08:47.114527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.374 [2024-06-10 12:08:47.117735] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.374 [2024-06-10 12:08:47.117820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.374 [2024-06-10 12:08:47.117835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.374 [2024-06-10 12:08:47.121055] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.374 [2024-06-10 12:08:47.121161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.374 [2024-06-10 12:08:47.121176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.374 [2024-06-10 12:08:47.124394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.374 [2024-06-10 12:08:47.124514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.374 [2024-06-10 12:08:47.124530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.374 [2024-06-10 12:08:47.127785] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.374 [2024-06-10 12:08:47.127907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.374 [2024-06-10 12:08:47.127923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.374 [2024-06-10 12:08:47.131708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.374 [2024-06-10 12:08:47.131817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.374 [2024-06-10 12:08:47.131833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.374 [2024-06-10 12:08:47.135142] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.374 [2024-06-10 12:08:47.135301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.374 [2024-06-10 12:08:47.135317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.374 [2024-06-10 12:08:47.138497] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.374 [2024-06-10 12:08:47.138612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.374 [2024-06-10 12:08:47.138628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.636 [2024-06-10 12:08:47.143210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.636 [2024-06-10 12:08:47.143378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.636 [2024-06-10 12:08:47.143394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.636 [2024-06-10 12:08:47.149355] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.636 [2024-06-10 12:08:47.149691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.636 [2024-06-10 12:08:47.149707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.636 [2024-06-10 12:08:47.157670] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.636 [2024-06-10 12:08:47.157965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.636 [2024-06-10 12:08:47.157981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.636 [2024-06-10 12:08:47.167226] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.636 [2024-06-10 12:08:47.167453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.636 [2024-06-10 12:08:47.167472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.636 [2024-06-10 12:08:47.176630] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.636 [2024-06-10 12:08:47.176814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.636 [2024-06-10 12:08:47.176830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.636 [2024-06-10 12:08:47.186734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.636 [2024-06-10 12:08:47.186873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.636 [2024-06-10 12:08:47.186889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.636 [2024-06-10 12:08:47.197157] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.636 [2024-06-10 12:08:47.197446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.636 [2024-06-10 12:08:47.197463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.636 [2024-06-10 12:08:47.207772] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.636 [2024-06-10 12:08:47.207989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.636 [2024-06-10 12:08:47.208005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.636 [2024-06-10 12:08:47.217525] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.636 [2024-06-10 12:08:47.217809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.636 [2024-06-10 12:08:47.217825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.636 [2024-06-10 12:08:47.227209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.636 [2024-06-10 12:08:47.227481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.636 [2024-06-10 12:08:47.227498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.636 [2024-06-10 12:08:47.237330] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.636 [2024-06-10 12:08:47.237436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.636 [2024-06-10 12:08:47.237450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.636 [2024-06-10 12:08:47.248115] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.636 [2024-06-10 12:08:47.248197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.636 [2024-06-10 12:08:47.248212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.636 [2024-06-10 12:08:47.258578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.636 [2024-06-10 12:08:47.258682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.636 
[2024-06-10 12:08:47.258697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.636 [2024-06-10 12:08:47.268628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.636 [2024-06-10 12:08:47.268819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.636 [2024-06-10 12:08:47.268835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.636 [2024-06-10 12:08:47.279111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.636 [2024-06-10 12:08:47.279233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.636 [2024-06-10 12:08:47.279254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.636 [2024-06-10 12:08:47.285964] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.636 [2024-06-10 12:08:47.286039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.636 [2024-06-10 12:08:47.286054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.636 [2024-06-10 12:08:47.290515] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.636 [2024-06-10 12:08:47.290616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.636 [2024-06-10 12:08:47.290632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.636 [2024-06-10 12:08:47.294221] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.636 [2024-06-10 12:08:47.294325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.636 [2024-06-10 12:08:47.294340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.636 [2024-06-10 12:08:47.297930] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.637 [2024-06-10 12:08:47.298054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.637 [2024-06-10 12:08:47.298069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.637 [2024-06-10 12:08:47.301759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.637 [2024-06-10 12:08:47.301845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:53.637 [2024-06-10 12:08:47.301861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.637 [2024-06-10 12:08:47.306273] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.637 [2024-06-10 12:08:47.306480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.637 [2024-06-10 12:08:47.306495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.637 [2024-06-10 12:08:47.315429] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.637 [2024-06-10 12:08:47.315618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.637 [2024-06-10 12:08:47.315633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.637 [2024-06-10 12:08:47.324102] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.637 [2024-06-10 12:08:47.324411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.637 [2024-06-10 12:08:47.324428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.637 [2024-06-10 12:08:47.334825] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.637 [2024-06-10 12:08:47.335059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.637 [2024-06-10 12:08:47.335075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.637 [2024-06-10 12:08:47.344397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.637 [2024-06-10 12:08:47.344637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.637 [2024-06-10 12:08:47.344654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.637 [2024-06-10 12:08:47.354418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.637 [2024-06-10 12:08:47.354675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.637 [2024-06-10 12:08:47.354691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.637 [2024-06-10 12:08:47.364169] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.637 [2024-06-10 12:08:47.364427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.637 [2024-06-10 12:08:47.364442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.637 [2024-06-10 12:08:47.374198] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.637 [2024-06-10 12:08:47.374373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.637 [2024-06-10 12:08:47.374389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.637 [2024-06-10 12:08:47.384983] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.637 [2024-06-10 12:08:47.385269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.637 [2024-06-10 12:08:47.385286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.637 [2024-06-10 12:08:47.395224] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.637 [2024-06-10 12:08:47.395469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.637 [2024-06-10 12:08:47.395489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.637 [2024-06-10 12:08:47.404826] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.637 [2024-06-10 12:08:47.404946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.637 [2024-06-10 12:08:47.404961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.899 [2024-06-10 12:08:47.411981] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.899 [2024-06-10 12:08:47.412041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.899 [2024-06-10 12:08:47.412057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.899 [2024-06-10 12:08:47.416734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.899 [2024-06-10 12:08:47.416820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.899 [2024-06-10 12:08:47.416836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.899 [2024-06-10 12:08:47.421857] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.899 [2024-06-10 12:08:47.421913] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.899 [2024-06-10 12:08:47.421928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.899 [2024-06-10 12:08:47.427339] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.899 [2024-06-10 12:08:47.427413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.899 [2024-06-10 12:08:47.427429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.899 [2024-06-10 12:08:47.431215] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.899 [2024-06-10 12:08:47.431288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.899 [2024-06-10 12:08:47.431303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.899 [2024-06-10 12:08:47.434790] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.899 [2024-06-10 12:08:47.434899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.899 [2024-06-10 12:08:47.434914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.899 [2024-06-10 12:08:47.439271] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.899 [2024-06-10 12:08:47.439539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.899 [2024-06-10 12:08:47.439555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.899 [2024-06-10 12:08:47.447104] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.899 [2024-06-10 12:08:47.447203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.899 [2024-06-10 12:08:47.447218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.899 [2024-06-10 12:08:47.455966] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.899 [2024-06-10 12:08:47.456046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.899 [2024-06-10 12:08:47.456061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.899 [2024-06-10 12:08:47.465359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.899 
[2024-06-10 12:08:47.465542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.899 [2024-06-10 12:08:47.465557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.900 [2024-06-10 12:08:47.474448] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.900 [2024-06-10 12:08:47.474711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.900 [2024-06-10 12:08:47.474728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.900 [2024-06-10 12:08:47.481062] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.900 [2024-06-10 12:08:47.481189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.900 [2024-06-10 12:08:47.481204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.900 [2024-06-10 12:08:47.484698] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.900 [2024-06-10 12:08:47.484768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.900 [2024-06-10 12:08:47.484782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.900 [2024-06-10 12:08:47.488132] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.900 [2024-06-10 12:08:47.488246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.900 [2024-06-10 12:08:47.488262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.900 [2024-06-10 12:08:47.491573] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.900 [2024-06-10 12:08:47.491694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.900 [2024-06-10 12:08:47.491710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.900 [2024-06-10 12:08:47.494972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.900 [2024-06-10 12:08:47.495094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.900 [2024-06-10 12:08:47.495113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.900 [2024-06-10 12:08:47.498502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) 
with pdu=0x2000190fef90 00:30:53.900 [2024-06-10 12:08:47.498601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.900 [2024-06-10 12:08:47.498616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.900 [2024-06-10 12:08:47.505079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.900 [2024-06-10 12:08:47.505285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.900 [2024-06-10 12:08:47.505301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.900 [2024-06-10 12:08:47.515279] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.900 [2024-06-10 12:08:47.515566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.900 [2024-06-10 12:08:47.515583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.900 [2024-06-10 12:08:47.524200] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.900 [2024-06-10 12:08:47.524452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.900 [2024-06-10 12:08:47.524468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.900 [2024-06-10 12:08:47.533744] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.900 [2024-06-10 12:08:47.534060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.900 [2024-06-10 12:08:47.534077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.900 [2024-06-10 12:08:47.543191] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.900 [2024-06-10 12:08:47.543531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.900 [2024-06-10 12:08:47.543547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.900 [2024-06-10 12:08:47.552217] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.900 [2024-06-10 12:08:47.552511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.900 [2024-06-10 12:08:47.552527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.900 [2024-06-10 12:08:47.561678] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.900 [2024-06-10 12:08:47.561875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.900 [2024-06-10 12:08:47.561891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.900 [2024-06-10 12:08:47.569708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.900 [2024-06-10 12:08:47.570029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.900 [2024-06-10 12:08:47.570046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.900 [2024-06-10 12:08:47.578732] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.900 [2024-06-10 12:08:47.578862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.900 [2024-06-10 12:08:47.578878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.900 [2024-06-10 12:08:47.587177] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.900 [2024-06-10 12:08:47.587494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.900 [2024-06-10 12:08:47.587509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.900 [2024-06-10 12:08:47.597461] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.900 [2024-06-10 12:08:47.597785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.900 [2024-06-10 12:08:47.597801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.900 [2024-06-10 12:08:47.604098] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.900 [2024-06-10 12:08:47.604196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.900 [2024-06-10 12:08:47.604212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.900 [2024-06-10 12:08:47.607928] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.900 [2024-06-10 12:08:47.608027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.900 [2024-06-10 12:08:47.608042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.900 [2024-06-10 12:08:47.611353] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.900 [2024-06-10 12:08:47.611480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.900 [2024-06-10 12:08:47.611496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.900 [2024-06-10 12:08:47.615067] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.900 [2024-06-10 12:08:47.615236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.900 [2024-06-10 12:08:47.615256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.900 [2024-06-10 12:08:47.619164] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.900 [2024-06-10 12:08:47.619397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.900 [2024-06-10 12:08:47.619413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.900 [2024-06-10 12:08:47.624609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.900 [2024-06-10 12:08:47.624933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.900 [2024-06-10 12:08:47.624950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.900 [2024-06-10 12:08:47.632266] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.900 [2024-06-10 12:08:47.632512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.900 [2024-06-10 12:08:47.632527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.900 [2024-06-10 12:08:47.641256] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.900 [2024-06-10 12:08:47.641416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.900 [2024-06-10 12:08:47.641432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.900 [2024-06-10 12:08:47.650337] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.900 [2024-06-10 12:08:47.650610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.900 [2024-06-10 12:08:47.650626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:30:53.901 [2024-06-10 12:08:47.658378] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.901 [2024-06-10 12:08:47.658575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.901 [2024-06-10 12:08:47.658591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.901 [2024-06-10 12:08:47.667013] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:53.901 [2024-06-10 12:08:47.667285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.901 [2024-06-10 12:08:47.667300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.162 [2024-06-10 12:08:47.676181] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.162 [2024-06-10 12:08:47.676465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.162 [2024-06-10 12:08:47.676482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.162 [2024-06-10 12:08:47.685403] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.162 [2024-06-10 12:08:47.685717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.162 [2024-06-10 12:08:47.685734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.162 [2024-06-10 12:08:47.694221] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.162 [2024-06-10 12:08:47.694430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.162 [2024-06-10 12:08:47.694452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.163 [2024-06-10 12:08:47.703069] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.163 [2024-06-10 12:08:47.703180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.163 [2024-06-10 12:08:47.703196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.163 [2024-06-10 12:08:47.713276] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.163 [2024-06-10 12:08:47.713523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.163 [2024-06-10 12:08:47.713539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.163 [2024-06-10 12:08:47.722705] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.163 [2024-06-10 12:08:47.722808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.163 [2024-06-10 12:08:47.722823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.163 [2024-06-10 12:08:47.727257] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.163 [2024-06-10 12:08:47.727376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.163 [2024-06-10 12:08:47.727392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.163 [2024-06-10 12:08:47.730843] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.163 [2024-06-10 12:08:47.730966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.163 [2024-06-10 12:08:47.730982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.163 [2024-06-10 12:08:47.734289] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.163 [2024-06-10 12:08:47.734439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.163 [2024-06-10 12:08:47.734454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.163 [2024-06-10 12:08:47.737726] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.163 [2024-06-10 12:08:47.737809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.163 [2024-06-10 12:08:47.737824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.163 [2024-06-10 12:08:47.741193] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.163 [2024-06-10 12:08:47.741302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.163 [2024-06-10 12:08:47.741317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.163 [2024-06-10 12:08:47.744625] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.163 [2024-06-10 12:08:47.744709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.163 [2024-06-10 12:08:47.744724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.163 [2024-06-10 12:08:47.747987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.163 [2024-06-10 12:08:47.748083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.163 [2024-06-10 12:08:47.748098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.163 [2024-06-10 12:08:47.751609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.163 [2024-06-10 12:08:47.751789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.163 [2024-06-10 12:08:47.751805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.163 [2024-06-10 12:08:47.756831] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.163 [2024-06-10 12:08:47.756930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.163 [2024-06-10 12:08:47.756945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.163 [2024-06-10 12:08:47.760256] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.163 [2024-06-10 12:08:47.760371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.163 [2024-06-10 12:08:47.760387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.163 [2024-06-10 12:08:47.763669] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.163 [2024-06-10 12:08:47.763789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.163 [2024-06-10 12:08:47.763804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.163 [2024-06-10 12:08:47.767000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.163 [2024-06-10 12:08:47.767073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.163 [2024-06-10 12:08:47.767088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.163 [2024-06-10 12:08:47.770381] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.163 [2024-06-10 12:08:47.770482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.163 [2024-06-10 12:08:47.770498] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.163 [2024-06-10 12:08:47.773683] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.163 [2024-06-10 12:08:47.773769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.163 [2024-06-10 12:08:47.773784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.163 [2024-06-10 12:08:47.777627] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.163 [2024-06-10 12:08:47.777743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.163 [2024-06-10 12:08:47.777759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.163 [2024-06-10 12:08:47.783970] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.163 [2024-06-10 12:08:47.784096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.163 [2024-06-10 12:08:47.784112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.163 [2024-06-10 12:08:47.789026] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.163 [2024-06-10 12:08:47.789184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.163 [2024-06-10 12:08:47.789199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.163 [2024-06-10 12:08:47.793252] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.163 [2024-06-10 12:08:47.793637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.163 [2024-06-10 12:08:47.793653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.163 [2024-06-10 12:08:47.797908] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.163 [2024-06-10 12:08:47.798031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.163 [2024-06-10 12:08:47.798047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.163 [2024-06-10 12:08:47.801689] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.163 [2024-06-10 12:08:47.801746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.163 
[2024-06-10 12:08:47.801762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.163 [2024-06-10 12:08:47.807111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.163 [2024-06-10 12:08:47.807189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.163 [2024-06-10 12:08:47.807204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.163 [2024-06-10 12:08:47.812307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.163 [2024-06-10 12:08:47.812385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.163 [2024-06-10 12:08:47.812400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.163 [2024-06-10 12:08:47.815687] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.163 [2024-06-10 12:08:47.815782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.163 [2024-06-10 12:08:47.815800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.163 [2024-06-10 12:08:47.819055] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.163 [2024-06-10 12:08:47.819130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.163 [2024-06-10 12:08:47.819146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.164 [2024-06-10 12:08:47.822550] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.164 [2024-06-10 12:08:47.822661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.164 [2024-06-10 12:08:47.822678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.164 [2024-06-10 12:08:47.826749] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.164 [2024-06-10 12:08:47.826854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.164 [2024-06-10 12:08:47.826870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.164 [2024-06-10 12:08:47.831590] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.164 [2024-06-10 12:08:47.831711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:54.164 [2024-06-10 12:08:47.831727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.164 [2024-06-10 12:08:47.834911] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.164 [2024-06-10 12:08:47.834987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.164 [2024-06-10 12:08:47.835002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.164 [2024-06-10 12:08:47.838355] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.164 [2024-06-10 12:08:47.838451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.164 [2024-06-10 12:08:47.838466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.164 [2024-06-10 12:08:47.841662] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.164 [2024-06-10 12:08:47.841735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.164 [2024-06-10 12:08:47.841749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.164 [2024-06-10 12:08:47.845092] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.164 [2024-06-10 12:08:47.845188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.164 [2024-06-10 12:08:47.845203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.164 [2024-06-10 12:08:47.851614] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.164 [2024-06-10 12:08:47.851700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.164 [2024-06-10 12:08:47.851715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.164 [2024-06-10 12:08:47.855488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.164 [2024-06-10 12:08:47.855601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.164 [2024-06-10 12:08:47.855616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.164 [2024-06-10 12:08:47.859874] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.164 [2024-06-10 12:08:47.859997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.164 [2024-06-10 12:08:47.860012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.164 [2024-06-10 12:08:47.863376] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.164 [2024-06-10 12:08:47.863501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.164 [2024-06-10 12:08:47.863516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.164 [2024-06-10 12:08:47.866759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.164 [2024-06-10 12:08:47.866833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.164 [2024-06-10 12:08:47.866849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.164 [2024-06-10 12:08:47.870160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.164 [2024-06-10 12:08:47.870267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.164 [2024-06-10 12:08:47.870282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.164 [2024-06-10 12:08:47.873570] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.164 [2024-06-10 12:08:47.873640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.164 [2024-06-10 12:08:47.873655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.164 [2024-06-10 12:08:47.877241] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.164 [2024-06-10 12:08:47.877349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.164 [2024-06-10 12:08:47.877365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.164 [2024-06-10 12:08:47.881481] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.164 [2024-06-10 12:08:47.881585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.164 [2024-06-10 12:08:47.881601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.164 [2024-06-10 12:08:47.885049] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.164 [2024-06-10 12:08:47.885146] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.164 [2024-06-10 12:08:47.885161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.164 [2024-06-10 12:08:47.891061] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.164 [2024-06-10 12:08:47.891175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.164 [2024-06-10 12:08:47.891191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.164 [2024-06-10 12:08:47.895206] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.164 [2024-06-10 12:08:47.895332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.164 [2024-06-10 12:08:47.895347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.164 [2024-06-10 12:08:47.899062] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.164 [2024-06-10 12:08:47.899136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.164 [2024-06-10 12:08:47.899151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.164 [2024-06-10 12:08:47.902457] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.164 [2024-06-10 12:08:47.902560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.164 [2024-06-10 12:08:47.902576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.164 [2024-06-10 12:08:47.905816] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.164 [2024-06-10 12:08:47.905886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.164 [2024-06-10 12:08:47.905901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.164 [2024-06-10 12:08:47.909222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.164 [2024-06-10 12:08:47.909321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.164 [2024-06-10 12:08:47.909336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.164 [2024-06-10 12:08:47.912568] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.164 [2024-06-10 12:08:47.912641] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.164 [2024-06-10 12:08:47.912655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.164 [2024-06-10 12:08:47.916021] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.164 [2024-06-10 12:08:47.916132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.164 [2024-06-10 12:08:47.916149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.164 [2024-06-10 12:08:47.919410] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.164 [2024-06-10 12:08:47.919532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.164 [2024-06-10 12:08:47.919547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.164 [2024-06-10 12:08:47.923369] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.164 [2024-06-10 12:08:47.923483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.164 [2024-06-10 12:08:47.923499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.164 [2024-06-10 12:08:47.926732] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.165 [2024-06-10 12:08:47.926811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.165 [2024-06-10 12:08:47.926827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.165 [2024-06-10 12:08:47.930068] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.165 [2024-06-10 12:08:47.930162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.165 [2024-06-10 12:08:47.930177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.427 [2024-06-10 12:08:47.933434] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.427 [2024-06-10 12:08:47.933524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.427 [2024-06-10 12:08:47.933539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.427 [2024-06-10 12:08:47.936790] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 
00:30:54.427 [2024-06-10 12:08:47.936910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.427 [2024-06-10 12:08:47.936925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.427 [2024-06-10 12:08:47.942860] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.427 [2024-06-10 12:08:47.943120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.427 [2024-06-10 12:08:47.943137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.427 [2024-06-10 12:08:47.951239] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.427 [2024-06-10 12:08:47.951506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.427 [2024-06-10 12:08:47.951522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.427 [2024-06-10 12:08:47.958387] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.427 [2024-06-10 12:08:47.958616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.427 [2024-06-10 12:08:47.958632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.427 [2024-06-10 12:08:47.964360] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.427 [2024-06-10 12:08:47.964481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.427 [2024-06-10 12:08:47.964496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.427 [2024-06-10 12:08:47.967732] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.427 [2024-06-10 12:08:47.967809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.427 [2024-06-10 12:08:47.967823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.427 [2024-06-10 12:08:47.971126] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.427 [2024-06-10 12:08:47.971228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.427 [2024-06-10 12:08:47.971249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.427 [2024-06-10 12:08:47.974528] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.427 [2024-06-10 12:08:47.974600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.427 [2024-06-10 12:08:47.974615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.427 [2024-06-10 12:08:47.977949] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.427 [2024-06-10 12:08:47.978051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.427 [2024-06-10 12:08:47.978068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.427 [2024-06-10 12:08:47.981313] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.427 [2024-06-10 12:08:47.981389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.427 [2024-06-10 12:08:47.981405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.427 [2024-06-10 12:08:47.985233] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.427 [2024-06-10 12:08:47.985353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.427 [2024-06-10 12:08:47.985369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.427 [2024-06-10 12:08:47.988689] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.427 [2024-06-10 12:08:47.988801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.427 [2024-06-10 12:08:47.988817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.427 [2024-06-10 12:08:47.993618] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.427 [2024-06-10 12:08:47.993835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.427 [2024-06-10 12:08:47.993850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.427 [2024-06-10 12:08:48.001486] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.427 [2024-06-10 12:08:48.001576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.427 [2024-06-10 12:08:48.001592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.427 [2024-06-10 12:08:48.005468] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.427 [2024-06-10 12:08:48.005664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.427 [2024-06-10 12:08:48.005680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.427 [2024-06-10 12:08:48.013832] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.427 [2024-06-10 12:08:48.013890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.427 [2024-06-10 12:08:48.013905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.427 [2024-06-10 12:08:48.017591] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.427 [2024-06-10 12:08:48.017703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.427 [2024-06-10 12:08:48.017718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.427 [2024-06-10 12:08:48.022549] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.427 [2024-06-10 12:08:48.022617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.427 [2024-06-10 12:08:48.022632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.427 [2024-06-10 12:08:48.032748] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.427 [2024-06-10 12:08:48.033044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.427 [2024-06-10 12:08:48.033061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.427 [2024-06-10 12:08:48.042821] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.427 [2024-06-10 12:08:48.043068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.427 [2024-06-10 12:08:48.043084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.427 [2024-06-10 12:08:48.052383] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.427 [2024-06-10 12:08:48.052697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.427 [2024-06-10 12:08:48.052716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:30:54.428 [2024-06-10 12:08:48.062810] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.428 [2024-06-10 12:08:48.062996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.428 [2024-06-10 12:08:48.063011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.428 [2024-06-10 12:08:48.073310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.428 [2024-06-10 12:08:48.073476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.428 [2024-06-10 12:08:48.073491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.428 [2024-06-10 12:08:48.083467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.428 [2024-06-10 12:08:48.083737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.428 [2024-06-10 12:08:48.083753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.428 [2024-06-10 12:08:48.093153] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.428 [2024-06-10 12:08:48.093420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.428 [2024-06-10 12:08:48.093437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.428 [2024-06-10 12:08:48.103756] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.428 [2024-06-10 12:08:48.103955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.428 [2024-06-10 12:08:48.103970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.428 [2024-06-10 12:08:48.109136] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.428 [2024-06-10 12:08:48.109268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.428 [2024-06-10 12:08:48.109284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.428 [2024-06-10 12:08:48.112551] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.428 [2024-06-10 12:08:48.112631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.428 [2024-06-10 12:08:48.112646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.428 [2024-06-10 12:08:48.116062] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.428 [2024-06-10 12:08:48.116139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.428 [2024-06-10 12:08:48.116154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.428 [2024-06-10 12:08:48.119504] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.428 [2024-06-10 12:08:48.119590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.428 [2024-06-10 12:08:48.119605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.428 [2024-06-10 12:08:48.123569] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.428 [2024-06-10 12:08:48.123637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.428 [2024-06-10 12:08:48.123653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.428 [2024-06-10 12:08:48.127040] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.428 [2024-06-10 12:08:48.127126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.428 [2024-06-10 12:08:48.127141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.428 [2024-06-10 12:08:48.130640] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.428 [2024-06-10 12:08:48.130720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.428 [2024-06-10 12:08:48.130735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.428 [2024-06-10 12:08:48.138544] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.428 [2024-06-10 12:08:48.138829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.428 [2024-06-10 12:08:48.138846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.428 [2024-06-10 12:08:48.145854] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.428 [2024-06-10 12:08:48.145974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.428 [2024-06-10 12:08:48.145990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.428 [2024-06-10 12:08:48.151645] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.428 [2024-06-10 12:08:48.151736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.428 [2024-06-10 12:08:48.151752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.428 [2024-06-10 12:08:48.157516] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.428 [2024-06-10 12:08:48.157613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.428 [2024-06-10 12:08:48.157627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.428 [2024-06-10 12:08:48.161112] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.428 [2024-06-10 12:08:48.161192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.428 [2024-06-10 12:08:48.161207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.428 [2024-06-10 12:08:48.164513] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.428 [2024-06-10 12:08:48.164600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.428 [2024-06-10 12:08:48.164615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.428 [2024-06-10 12:08:48.167960] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.428 [2024-06-10 12:08:48.168029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.428 [2024-06-10 12:08:48.168043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.428 [2024-06-10 12:08:48.171331] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.428 [2024-06-10 12:08:48.171410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.428 [2024-06-10 12:08:48.171425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.428 [2024-06-10 12:08:48.174738] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.428 [2024-06-10 12:08:48.174817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.428 [2024-06-10 12:08:48.174832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.428 [2024-06-10 12:08:48.178389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.428 [2024-06-10 12:08:48.178500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.428 [2024-06-10 12:08:48.178516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.428 [2024-06-10 12:08:48.181990] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.428 [2024-06-10 12:08:48.182068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.428 [2024-06-10 12:08:48.182083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.428 [2024-06-10 12:08:48.185560] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.428 [2024-06-10 12:08:48.185672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.428 [2024-06-10 12:08:48.185688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.428 [2024-06-10 12:08:48.191077] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.428 [2024-06-10 12:08:48.191152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.428 [2024-06-10 12:08:48.191167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.428 [2024-06-10 12:08:48.196779] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.428 [2024-06-10 12:08:48.196878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.428 [2024-06-10 12:08:48.196896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.690 [2024-06-10 12:08:48.201326] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.690 [2024-06-10 12:08:48.201488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.690 [2024-06-10 12:08:48.201503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.690 [2024-06-10 12:08:48.207677] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.690 [2024-06-10 12:08:48.207784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.690 
[2024-06-10 12:08:48.207799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.690 [2024-06-10 12:08:48.214594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.690 [2024-06-10 12:08:48.214855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.690 [2024-06-10 12:08:48.214872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.690 [2024-06-10 12:08:48.220066] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.690 [2024-06-10 12:08:48.220201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.690 [2024-06-10 12:08:48.220216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.690 [2024-06-10 12:08:48.224850] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.690 [2024-06-10 12:08:48.224954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.690 [2024-06-10 12:08:48.224970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.690 [2024-06-10 12:08:48.229844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.690 [2024-06-10 12:08:48.229953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.690 [2024-06-10 12:08:48.229969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.690 [2024-06-10 12:08:48.235646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.690 [2024-06-10 12:08:48.235916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.690 [2024-06-10 12:08:48.235934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.690 [2024-06-10 12:08:48.242367] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.690 [2024-06-10 12:08:48.242498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.691 [2024-06-10 12:08:48.242513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.691 [2024-06-10 12:08:48.246022] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.691 [2024-06-10 12:08:48.246086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:54.691 [2024-06-10 12:08:48.246100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.691 [2024-06-10 12:08:48.249502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.691 [2024-06-10 12:08:48.249576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.691 [2024-06-10 12:08:48.249591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.691 [2024-06-10 12:08:48.252868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.691 [2024-06-10 12:08:48.252942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.691 [2024-06-10 12:08:48.252957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.691 [2024-06-10 12:08:48.256312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.691 [2024-06-10 12:08:48.256446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.691 [2024-06-10 12:08:48.256462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.691 [2024-06-10 12:08:48.260461] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.691 [2024-06-10 12:08:48.260580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.691 [2024-06-10 12:08:48.260595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.691 [2024-06-10 12:08:48.264012] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.691 [2024-06-10 12:08:48.264157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.691 [2024-06-10 12:08:48.264173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.691 [2024-06-10 12:08:48.268828] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.691 [2024-06-10 12:08:48.268909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.691 [2024-06-10 12:08:48.268924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.691 [2024-06-10 12:08:48.272181] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.691 [2024-06-10 12:08:48.272258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.691 [2024-06-10 12:08:48.272274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.691 [2024-06-10 12:08:48.275782] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.691 [2024-06-10 12:08:48.275901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.691 [2024-06-10 12:08:48.275920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.691 [2024-06-10 12:08:48.281623] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.691 [2024-06-10 12:08:48.281818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.691 [2024-06-10 12:08:48.281833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.691 [2024-06-10 12:08:48.288230] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.691 [2024-06-10 12:08:48.288363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.691 [2024-06-10 12:08:48.288378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.691 [2024-06-10 12:08:48.292866] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.691 [2024-06-10 12:08:48.293140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.691 [2024-06-10 12:08:48.293157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.691 [2024-06-10 12:08:48.299896] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.691 [2024-06-10 12:08:48.300022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.691 [2024-06-10 12:08:48.300037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.691 [2024-06-10 12:08:48.309437] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.691 [2024-06-10 12:08:48.309568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.691 [2024-06-10 12:08:48.309583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.691 [2024-06-10 12:08:48.313777] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.691 [2024-06-10 12:08:48.314006] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.691 [2024-06-10 12:08:48.314022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.691 [2024-06-10 12:08:48.319149] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.691 [2024-06-10 12:08:48.319205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.691 [2024-06-10 12:08:48.319220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.691 [2024-06-10 12:08:48.326511] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.691 [2024-06-10 12:08:48.326642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.691 [2024-06-10 12:08:48.326658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.691 [2024-06-10 12:08:48.333917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.691 [2024-06-10 12:08:48.333986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.691 [2024-06-10 12:08:48.334002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.691 [2024-06-10 12:08:48.338658] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.691 [2024-06-10 12:08:48.338742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.691 [2024-06-10 12:08:48.338757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.691 [2024-06-10 12:08:48.344416] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.691 [2024-06-10 12:08:48.344628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.691 [2024-06-10 12:08:48.344643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.691 [2024-06-10 12:08:48.348569] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.691 [2024-06-10 12:08:48.348655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.691 [2024-06-10 12:08:48.348670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.691 [2024-06-10 12:08:48.352130] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.691 [2024-06-10 12:08:48.352261] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.691 [2024-06-10 12:08:48.352276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.691 [2024-06-10 12:08:48.355763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.691 [2024-06-10 12:08:48.355840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.691 [2024-06-10 12:08:48.355855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.691 [2024-06-10 12:08:48.359577] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.691 [2024-06-10 12:08:48.359632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.691 [2024-06-10 12:08:48.359648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.691 [2024-06-10 12:08:48.366168] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.691 [2024-06-10 12:08:48.366424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.691 [2024-06-10 12:08:48.366440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.691 [2024-06-10 12:08:48.374235] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.691 [2024-06-10 12:08:48.374411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.691 [2024-06-10 12:08:48.374425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.691 [2024-06-10 12:08:48.384248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.691 [2024-06-10 12:08:48.384376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.691 [2024-06-10 12:08:48.384392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.692 [2024-06-10 12:08:48.392973] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.692 [2024-06-10 12:08:48.393208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.692 [2024-06-10 12:08:48.393223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.692 [2024-06-10 12:08:48.401398] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.692 
[2024-06-10 12:08:48.401597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.692 [2024-06-10 12:08:48.401613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.692 [2024-06-10 12:08:48.410306] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.692 [2024-06-10 12:08:48.410554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.692 [2024-06-10 12:08:48.410570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.692 [2024-06-10 12:08:48.415271] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.692 [2024-06-10 12:08:48.415341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.692 [2024-06-10 12:08:48.415356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.692 [2024-06-10 12:08:48.420004] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.692 [2024-06-10 12:08:48.420087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.692 [2024-06-10 12:08:48.420102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.692 [2024-06-10 12:08:48.428859] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.692 [2024-06-10 12:08:48.429033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.692 [2024-06-10 12:08:48.429048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.692 [2024-06-10 12:08:48.437198] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.692 [2024-06-10 12:08:48.437407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.692 [2024-06-10 12:08:48.437422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.692 [2024-06-10 12:08:48.443421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.692 [2024-06-10 12:08:48.443671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.692 [2024-06-10 12:08:48.443690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.692 [2024-06-10 12:08:48.447508] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) 
with pdu=0x2000190fef90 00:30:54.692 [2024-06-10 12:08:48.447628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.692 [2024-06-10 12:08:48.447644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.692 [2024-06-10 12:08:48.451394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.692 [2024-06-10 12:08:48.451486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.692 [2024-06-10 12:08:48.451501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.692 [2024-06-10 12:08:48.460476] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.692 [2024-06-10 12:08:48.460598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.692 [2024-06-10 12:08:48.460613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.955 [2024-06-10 12:08:48.468361] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.955 [2024-06-10 12:08:48.468452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.955 [2024-06-10 12:08:48.468467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.955 [2024-06-10 12:08:48.475445] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.955 [2024-06-10 12:08:48.475536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.955 [2024-06-10 12:08:48.475551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.955 [2024-06-10 12:08:48.479283] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.955 [2024-06-10 12:08:48.479438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.955 [2024-06-10 12:08:48.479453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.955 [2024-06-10 12:08:48.482988] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.955 [2024-06-10 12:08:48.483059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.955 [2024-06-10 12:08:48.483074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.955 [2024-06-10 12:08:48.490946] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.955 [2024-06-10 12:08:48.491058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.955 [2024-06-10 12:08:48.491073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.955 [2024-06-10 12:08:48.494585] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.955 [2024-06-10 12:08:48.494690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.955 [2024-06-10 12:08:48.494705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.955 [2024-06-10 12:08:48.498102] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.955 [2024-06-10 12:08:48.498176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.955 [2024-06-10 12:08:48.498191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.955 [2024-06-10 12:08:48.504754] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.955 [2024-06-10 12:08:48.504880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.955 [2024-06-10 12:08:48.504896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.955 [2024-06-10 12:08:48.509469] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.955 [2024-06-10 12:08:48.509667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.955 [2024-06-10 12:08:48.509682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.955 [2024-06-10 12:08:48.513088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.955 [2024-06-10 12:08:48.513187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.955 [2024-06-10 12:08:48.513203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.955 [2024-06-10 12:08:48.520674] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.955 [2024-06-10 12:08:48.520791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.955 [2024-06-10 12:08:48.520806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.955 [2024-06-10 12:08:48.524043] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.955 [2024-06-10 12:08:48.524122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.955 [2024-06-10 12:08:48.524137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.955 [2024-06-10 12:08:48.527522] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.955 [2024-06-10 12:08:48.527645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.955 [2024-06-10 12:08:48.527661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.955 [2024-06-10 12:08:48.530886] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.955 [2024-06-10 12:08:48.530985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.955 [2024-06-10 12:08:48.531000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.955 [2024-06-10 12:08:48.534211] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.955 [2024-06-10 12:08:48.534291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.955 [2024-06-10 12:08:48.534306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.955 [2024-06-10 12:08:48.537568] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.955 [2024-06-10 12:08:48.537688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.955 [2024-06-10 12:08:48.537703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.955 [2024-06-10 12:08:48.540976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.955 [2024-06-10 12:08:48.541051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.955 [2024-06-10 12:08:48.541066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.955 [2024-06-10 12:08:48.544335] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.955 [2024-06-10 12:08:48.544434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.955 [2024-06-10 12:08:48.544449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.955 
[2024-06-10 12:08:48.547706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.955 [2024-06-10 12:08:48.547824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.955 [2024-06-10 12:08:48.547839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.955 [2024-06-10 12:08:48.551024] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.955 [2024-06-10 12:08:48.551101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.955 [2024-06-10 12:08:48.551116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.955 [2024-06-10 12:08:48.554363] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.955 [2024-06-10 12:08:48.554482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.955 [2024-06-10 12:08:48.554498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.955 [2024-06-10 12:08:48.557670] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.955 [2024-06-10 12:08:48.557771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.955 [2024-06-10 12:08:48.557786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.955 [2024-06-10 12:08:48.560951] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.955 [2024-06-10 12:08:48.561030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.955 [2024-06-10 12:08:48.561047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.955 [2024-06-10 12:08:48.564309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.955 [2024-06-10 12:08:48.564429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.955 [2024-06-10 12:08:48.564444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.955 [2024-06-10 12:08:48.567593] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.955 [2024-06-10 12:08:48.567664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.955 [2024-06-10 12:08:48.567679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:30:54.955 [2024-06-10 12:08:48.571064] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.955 [2024-06-10 12:08:48.571212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.955 [2024-06-10 12:08:48.571226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.956 [2024-06-10 12:08:48.577083] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.956 [2024-06-10 12:08:48.577254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.956 [2024-06-10 12:08:48.577270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.956 [2024-06-10 12:08:48.584122] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.956 [2024-06-10 12:08:48.584377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.956 [2024-06-10 12:08:48.584392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.956 [2024-06-10 12:08:48.587950] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.956 [2024-06-10 12:08:48.588107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.956 [2024-06-10 12:08:48.588122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.956 [2024-06-10 12:08:48.592071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.956 [2024-06-10 12:08:48.592283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.956 [2024-06-10 12:08:48.592298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.956 [2024-06-10 12:08:48.602476] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.956 [2024-06-10 12:08:48.602536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.956 [2024-06-10 12:08:48.602551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.956 [2024-06-10 12:08:48.611887] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.956 [2024-06-10 12:08:48.612100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.956 [2024-06-10 12:08:48.612116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.956 [2024-06-10 12:08:48.620961] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xffc040) with pdu=0x2000190fef90 00:30:54.956 [2024-06-10 12:08:48.621229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.956 [2024-06-10 12:08:48.621249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.956 00:30:54.956 Latency(us) 00:30:54.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:54.956 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:54.956 nvme0n1 : 2.01 5583.86 697.98 0.00 0.00 2859.63 1460.91 11741.87 00:30:54.956 =================================================================================================================== 00:30:54.956 Total : 5583.86 697.98 0.00 0.00 2859.63 1460.91 11741.87 00:30:54.956 0 00:30:54.956 12:08:48 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:54.956 12:08:48 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:54.956 12:08:48 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:54.956 | .driver_specific 00:30:54.956 | .nvme_error 00:30:54.956 | .status_code 00:30:54.956 | .command_transient_transport_error' 00:30:54.956 12:08:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:55.217 12:08:48 -- host/digest.sh@71 -- # (( 361 > 0 )) 00:30:55.217 12:08:48 -- host/digest.sh@73 -- # killprocess 2143269 00:30:55.217 12:08:48 -- common/autotest_common.sh@926 -- # '[' -z 2143269 ']' 00:30:55.217 12:08:48 -- common/autotest_common.sh@930 -- # kill -0 2143269 00:30:55.217 12:08:48 -- common/autotest_common.sh@931 -- # uname 00:30:55.217 12:08:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:55.217 12:08:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2143269 00:30:55.217 12:08:48 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:55.217 12:08:48 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:55.217 12:08:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2143269' 00:30:55.217 killing process with pid 2143269 00:30:55.217 12:08:48 -- common/autotest_common.sh@945 -- # kill 2143269 00:30:55.217 Received shutdown signal, test time was about 2.000000 seconds 00:30:55.217 00:30:55.217 Latency(us) 00:30:55.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:55.217 =================================================================================================================== 00:30:55.217 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:55.217 12:08:48 -- common/autotest_common.sh@950 -- # wait 2143269 00:30:55.217 12:08:48 -- host/digest.sh@115 -- # killprocess 2140841 00:30:55.217 12:08:48 -- common/autotest_common.sh@926 -- # '[' -z 2140841 ']' 00:30:55.217 12:08:48 -- common/autotest_common.sh@930 -- # kill -0 2140841 00:30:55.217 12:08:48 -- common/autotest_common.sh@931 -- # uname 00:30:55.217 12:08:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:55.217 12:08:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2140841 00:30:55.478 12:08:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:55.478 12:08:49 -- common/autotest_common.sh@936 -- # 
'[' reactor_0 = sudo ']' 00:30:55.478 12:08:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2140841' 00:30:55.478 killing process with pid 2140841 00:30:55.478 12:08:49 -- common/autotest_common.sh@945 -- # kill 2140841 00:30:55.478 12:08:49 -- common/autotest_common.sh@950 -- # wait 2140841 00:30:55.478 00:30:55.478 real 0m15.847s 00:30:55.478 user 0m30.874s 00:30:55.478 sys 0m3.330s 00:30:55.478 12:08:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:55.478 12:08:49 -- common/autotest_common.sh@10 -- # set +x 00:30:55.478 ************************************ 00:30:55.478 END TEST nvmf_digest_error 00:30:55.478 ************************************ 00:30:55.478 12:08:49 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:30:55.478 12:08:49 -- host/digest.sh@139 -- # nvmftestfini 00:30:55.478 12:08:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:55.478 12:08:49 -- nvmf/common.sh@116 -- # sync 00:30:55.478 12:08:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:55.478 12:08:49 -- nvmf/common.sh@119 -- # set +e 00:30:55.478 12:08:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:55.478 12:08:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:55.478 rmmod nvme_tcp 00:30:55.478 rmmod nvme_fabrics 00:30:55.478 rmmod nvme_keyring 00:30:55.478 12:08:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:55.739 12:08:49 -- nvmf/common.sh@123 -- # set -e 00:30:55.739 12:08:49 -- nvmf/common.sh@124 -- # return 0 00:30:55.739 12:08:49 -- nvmf/common.sh@477 -- # '[' -n 2140841 ']' 00:30:55.739 12:08:49 -- nvmf/common.sh@478 -- # killprocess 2140841 00:30:55.739 12:08:49 -- common/autotest_common.sh@926 -- # '[' -z 2140841 ']' 00:30:55.739 12:08:49 -- common/autotest_common.sh@930 -- # kill -0 2140841 00:30:55.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2140841) - No such process 00:30:55.739 12:08:49 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2140841 is not found' 00:30:55.739 Process with pid 2140841 is not found 00:30:55.739 12:08:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:55.739 12:08:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:55.739 12:08:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:55.739 12:08:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:55.739 12:08:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:55.739 12:08:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:55.739 12:08:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:55.739 12:08:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.653 12:08:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:57.653 00:30:57.653 real 0m41.344s 00:30:57.653 user 1m4.273s 00:30:57.653 sys 0m12.028s 00:30:57.653 12:08:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:57.653 12:08:51 -- common/autotest_common.sh@10 -- # set +x 00:30:57.653 ************************************ 00:30:57.653 END TEST nvmf_digest 00:30:57.653 ************************************ 00:30:57.653 12:08:51 -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:30:57.653 12:08:51 -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]] 00:30:57.653 12:08:51 -- nvmf/nvmf.sh@119 -- # [[ phy == phy ]] 00:30:57.653 12:08:51 -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:57.653 12:08:51 -- common/autotest_common.sh@1077 -- # 
'[' 3 -le 1 ']' 00:30:57.653 12:08:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:57.653 12:08:51 -- common/autotest_common.sh@10 -- # set +x 00:30:57.653 ************************************ 00:30:57.653 START TEST nvmf_bdevperf 00:30:57.653 ************************************ 00:30:57.653 12:08:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:57.914 * Looking for test storage... 00:30:57.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:57.914 12:08:51 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:57.914 12:08:51 -- nvmf/common.sh@7 -- # uname -s 00:30:57.914 12:08:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:57.914 12:08:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:57.914 12:08:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:57.914 12:08:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:57.914 12:08:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:57.914 12:08:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:57.914 12:08:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:57.914 12:08:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:57.914 12:08:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:57.914 12:08:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:57.914 12:08:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:57.914 12:08:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:57.914 12:08:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:57.914 12:08:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:57.914 12:08:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:57.914 12:08:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:57.914 12:08:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:57.915 12:08:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:57.915 12:08:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:57.915 12:08:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.915 12:08:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.915 12:08:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.915 12:08:51 -- paths/export.sh@5 -- # export PATH 00:30:57.915 12:08:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.915 12:08:51 -- nvmf/common.sh@46 -- # : 0 00:30:57.915 12:08:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:57.915 12:08:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:57.915 12:08:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:57.915 12:08:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:57.915 12:08:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:57.915 12:08:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:57.915 12:08:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:57.915 12:08:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:57.915 12:08:51 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:57.915 12:08:51 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:57.915 12:08:51 -- host/bdevperf.sh@24 -- # nvmftestinit 00:30:57.915 12:08:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:57.915 12:08:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:57.915 12:08:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:57.915 12:08:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:57.915 12:08:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:57.915 12:08:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.915 12:08:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:57.915 12:08:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.915 12:08:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:57.915 12:08:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:57.915 12:08:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:57.915 12:08:51 -- common/autotest_common.sh@10 -- # set +x 00:31:04.505 12:08:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:04.505 12:08:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:04.505 12:08:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:04.505 12:08:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:04.505 12:08:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:04.505 12:08:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:04.505 12:08:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:04.505 12:08:58 -- nvmf/common.sh@294 -- # net_devs=() 00:31:04.505 12:08:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:04.505 12:08:58 -- nvmf/common.sh@295 
-- # e810=() 00:31:04.505 12:08:58 -- nvmf/common.sh@295 -- # local -ga e810 00:31:04.505 12:08:58 -- nvmf/common.sh@296 -- # x722=() 00:31:04.505 12:08:58 -- nvmf/common.sh@296 -- # local -ga x722 00:31:04.505 12:08:58 -- nvmf/common.sh@297 -- # mlx=() 00:31:04.505 12:08:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:04.505 12:08:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:04.505 12:08:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:04.505 12:08:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:04.505 12:08:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:04.505 12:08:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:04.505 12:08:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:04.505 12:08:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:04.505 12:08:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:04.505 12:08:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:04.505 12:08:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:04.505 12:08:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:04.505 12:08:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:04.505 12:08:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:04.505 12:08:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:04.505 12:08:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:04.505 12:08:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:04.505 12:08:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:04.505 12:08:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:04.505 12:08:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:04.505 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:04.505 12:08:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:04.505 12:08:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:04.505 12:08:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.505 12:08:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.505 12:08:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:04.505 12:08:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:04.505 12:08:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:04.505 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:04.505 12:08:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:04.505 12:08:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:04.505 12:08:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.505 12:08:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.505 12:08:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:04.505 12:08:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:04.505 12:08:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:04.505 12:08:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:04.505 12:08:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:04.505 12:08:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.505 12:08:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:04.505 12:08:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.505 12:08:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:04.505 Found 
net devices under 0000:31:00.0: cvl_0_0 00:31:04.505 12:08:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.505 12:08:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:04.505 12:08:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.505 12:08:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:04.505 12:08:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.505 12:08:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:04.505 Found net devices under 0000:31:00.1: cvl_0_1 00:31:04.505 12:08:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.505 12:08:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:04.505 12:08:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:04.505 12:08:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:04.505 12:08:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:04.505 12:08:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:04.505 12:08:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:04.505 12:08:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:04.505 12:08:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:04.505 12:08:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:04.505 12:08:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:04.505 12:08:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:04.505 12:08:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:04.505 12:08:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:04.505 12:08:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:04.505 12:08:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:04.505 12:08:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:04.505 12:08:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:04.505 12:08:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:04.505 12:08:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:04.505 12:08:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:04.505 12:08:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:04.505 12:08:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:04.767 12:08:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:04.767 12:08:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:04.767 12:08:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:04.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:04.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:31:04.767 00:31:04.767 --- 10.0.0.2 ping statistics --- 00:31:04.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.767 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:31:04.767 12:08:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:04.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:04.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:31:04.767 00:31:04.767 --- 10.0.0.1 ping statistics --- 00:31:04.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.767 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:31:04.767 12:08:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:04.767 12:08:58 -- nvmf/common.sh@410 -- # return 0 00:31:04.767 12:08:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:04.767 12:08:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:04.767 12:08:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:04.767 12:08:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:04.767 12:08:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:04.767 12:08:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:04.767 12:08:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:04.767 12:08:58 -- host/bdevperf.sh@25 -- # tgt_init 00:31:04.767 12:08:58 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:04.767 12:08:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:04.767 12:08:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:04.767 12:08:58 -- common/autotest_common.sh@10 -- # set +x 00:31:04.767 12:08:58 -- nvmf/common.sh@469 -- # nvmfpid=2148068 00:31:04.767 12:08:58 -- nvmf/common.sh@470 -- # waitforlisten 2148068 00:31:04.767 12:08:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:04.767 12:08:58 -- common/autotest_common.sh@819 -- # '[' -z 2148068 ']' 00:31:04.767 12:08:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.767 12:08:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:04.767 12:08:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:04.767 12:08:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:04.767 12:08:58 -- common/autotest_common.sh@10 -- # set +x 00:31:04.767 [2024-06-10 12:08:58.490587] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:04.767 [2024-06-10 12:08:58.490652] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:04.767 EAL: No free 2048 kB hugepages reported on node 1 00:31:05.027 [2024-06-10 12:08:58.577401] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:05.027 [2024-06-10 12:08:58.670316] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:05.027 [2024-06-10 12:08:58.670485] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:05.027 [2024-06-10 12:08:58.670497] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:05.027 [2024-06-10 12:08:58.670506] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:05.027 [2024-06-10 12:08:58.670695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:05.027 [2024-06-10 12:08:58.670829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:05.027 [2024-06-10 12:08:58.670830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:05.598 12:08:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:05.598 12:08:59 -- common/autotest_common.sh@852 -- # return 0 00:31:05.598 12:08:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:05.598 12:08:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:05.598 12:08:59 -- common/autotest_common.sh@10 -- # set +x 00:31:05.598 12:08:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:05.598 12:08:59 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:05.598 12:08:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:05.598 12:08:59 -- common/autotest_common.sh@10 -- # set +x 00:31:05.598 [2024-06-10 12:08:59.296620] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:05.598 12:08:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:05.598 12:08:59 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:05.598 12:08:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:05.598 12:08:59 -- common/autotest_common.sh@10 -- # set +x 00:31:05.598 Malloc0 00:31:05.598 12:08:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:05.598 12:08:59 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:05.598 12:08:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:05.598 12:08:59 -- common/autotest_common.sh@10 -- # set +x 00:31:05.598 12:08:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:05.598 12:08:59 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:05.598 12:08:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:05.598 12:08:59 -- common/autotest_common.sh@10 -- # set +x 00:31:05.598 12:08:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:05.598 12:08:59 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:05.598 12:08:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:05.598 12:08:59 -- common/autotest_common.sh@10 -- # set +x 00:31:05.598 [2024-06-10 12:08:59.347772] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:05.598 12:08:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:05.598 12:08:59 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:31:05.598 12:08:59 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:31:05.598 12:08:59 -- nvmf/common.sh@520 -- # config=() 00:31:05.598 12:08:59 -- nvmf/common.sh@520 -- # local subsystem config 00:31:05.598 12:08:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:05.598 12:08:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:05.598 { 00:31:05.598 "params": { 00:31:05.598 "name": "Nvme$subsystem", 00:31:05.598 "trtype": "$TEST_TRANSPORT", 00:31:05.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:05.598 "adrfam": "ipv4", 00:31:05.598 "trsvcid": "$NVMF_PORT", 00:31:05.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:05.598 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:05.598 "hdgst": ${hdgst:-false}, 00:31:05.598 "ddgst": ${ddgst:-false} 00:31:05.598 }, 00:31:05.598 "method": "bdev_nvme_attach_controller" 00:31:05.598 } 00:31:05.598 EOF 00:31:05.598 )") 00:31:05.598 12:08:59 -- nvmf/common.sh@542 -- # cat 00:31:05.598 12:08:59 -- nvmf/common.sh@544 -- # jq . 00:31:05.598 12:08:59 -- nvmf/common.sh@545 -- # IFS=, 00:31:05.598 12:08:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:05.598 "params": { 00:31:05.598 "name": "Nvme1", 00:31:05.598 "trtype": "tcp", 00:31:05.598 "traddr": "10.0.0.2", 00:31:05.598 "adrfam": "ipv4", 00:31:05.598 "trsvcid": "4420", 00:31:05.598 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:05.598 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:05.598 "hdgst": false, 00:31:05.598 "ddgst": false 00:31:05.598 }, 00:31:05.598 "method": "bdev_nvme_attach_controller" 00:31:05.598 }' 00:31:05.858 [2024-06-10 12:08:59.397296] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:05.858 [2024-06-10 12:08:59.397349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2148402 ] 00:31:05.858 EAL: No free 2048 kB hugepages reported on node 1 00:31:05.858 [2024-06-10 12:08:59.456816] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.858 [2024-06-10 12:08:59.519716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:06.117 Running I/O for 1 seconds... 00:31:07.057 00:31:07.057 Latency(us) 00:31:07.057 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:07.057 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:07.057 Verification LBA range: start 0x0 length 0x4000 00:31:07.057 Nvme1n1 : 1.01 13914.12 54.35 0.00 0.00 9156.52 1201.49 16384.00 00:31:07.057 =================================================================================================================== 00:31:07.057 Total : 13914.12 54.35 0.00 0.00 9156.52 1201.49 16384.00 00:31:07.318 12:09:00 -- host/bdevperf.sh@30 -- # bdevperfpid=2148782 00:31:07.318 12:09:00 -- host/bdevperf.sh@32 -- # sleep 3 00:31:07.318 12:09:00 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:31:07.318 12:09:00 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:31:07.318 12:09:00 -- nvmf/common.sh@520 -- # config=() 00:31:07.318 12:09:00 -- nvmf/common.sh@520 -- # local subsystem config 00:31:07.318 12:09:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:07.318 12:09:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:07.318 { 00:31:07.318 "params": { 00:31:07.318 "name": "Nvme$subsystem", 00:31:07.318 "trtype": "$TEST_TRANSPORT", 00:31:07.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:07.318 "adrfam": "ipv4", 00:31:07.318 "trsvcid": "$NVMF_PORT", 00:31:07.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:07.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:07.318 "hdgst": ${hdgst:-false}, 00:31:07.318 "ddgst": ${ddgst:-false} 00:31:07.318 }, 00:31:07.318 "method": "bdev_nvme_attach_controller" 00:31:07.318 } 00:31:07.318 EOF 00:31:07.318 )") 00:31:07.318 12:09:00 -- nvmf/common.sh@542 -- # cat 00:31:07.318 12:09:00 -- nvmf/common.sh@544 -- # jq . 
00:31:07.318 12:09:00 -- nvmf/common.sh@545 -- # IFS=, 00:31:07.318 12:09:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:07.318 "params": { 00:31:07.318 "name": "Nvme1", 00:31:07.318 "trtype": "tcp", 00:31:07.318 "traddr": "10.0.0.2", 00:31:07.318 "adrfam": "ipv4", 00:31:07.318 "trsvcid": "4420", 00:31:07.318 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:07.318 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:07.318 "hdgst": false, 00:31:07.318 "ddgst": false 00:31:07.318 }, 00:31:07.318 "method": "bdev_nvme_attach_controller" 00:31:07.318 }' 00:31:07.318 [2024-06-10 12:09:00.976127] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:07.318 [2024-06-10 12:09:00.976198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2148782 ] 00:31:07.318 EAL: No free 2048 kB hugepages reported on node 1 00:31:07.318 [2024-06-10 12:09:01.036133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.578 [2024-06-10 12:09:01.098369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:07.578 Running I/O for 15 seconds... 00:31:10.882 12:09:03 -- host/bdevperf.sh@33 -- # kill -9 2148068 00:31:10.882 12:09:03 -- host/bdevperf.sh@35 -- # sleep 3 00:31:10.882 [2024-06-10 12:09:03.942539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.882 [2024-06-10 12:09:03.942584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.882 [2024-06-10 12:09:03.942608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.882 [2024-06-10 12:09:03.942618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.882 [2024-06-10 12:09:03.942628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.882 [2024-06-10 12:09:03.942638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.882 [2024-06-10 12:09:03.942647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.882 [2024-06-10 12:09:03.942657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.882 [2024-06-10 12:09:03.942667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.882 [2024-06-10 12:09:03.942674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.882 [2024-06-10 12:09:03.942684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.882 [2024-06-10 12:09:03.942695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.882 [2024-06-10 12:09:03.942705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.882 [2024-06-10 12:09:03.942715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.882 [2024-06-10 12:09:03.942725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.882 [2024-06-10 12:09:03.942734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.882 [2024-06-10 12:09:03.942744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.882 [2024-06-10 12:09:03.942754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.882 [2024-06-10 12:09:03.942766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.882 [2024-06-10 12:09:03.942776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.882 [2024-06-10 12:09:03.942786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.882 [2024-06-10 12:09:03.942795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.882 [2024-06-10 12:09:03.942805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.882 [2024-06-10 12:09:03.942816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.882 [2024-06-10 12:09:03.942826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.882 [2024-06-10 12:09:03.942835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.882 [2024-06-10 12:09:03.942851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.882 [2024-06-10 12:09:03.942859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.882 [2024-06-10 12:09:03.942869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.882 [2024-06-10 12:09:03.942879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.882 [2024-06-10 12:09:03.942890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.882 [2024-06-10 12:09:03.942900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.882 [2024-06-10 12:09:03.942911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:101 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.882 [2024-06-10 12:09:03.942920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.882 [2024-06-10 12:09:03.942930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.882 [2024-06-10 12:09:03.942941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.882 [2024-06-10 12:09:03.942951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.882 [2024-06-10 12:09:03.942962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.882 [2024-06-10 12:09:03.942974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.882 [2024-06-10 12:09:03.942985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.882 [2024-06-10 12:09:03.942995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.882 [2024-06-10 12:09:03.943006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.882 [2024-06-10 12:09:03.943016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.882 [2024-06-10 12:09:03.943026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63976 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:64032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:10.883 [2024-06-10 12:09:03.943325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943513] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:64240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:64264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:64288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943719] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:64392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.883 [2024-06-10 12:09:03.943811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.883 [2024-06-10 12:09:03.943821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.884 [2024-06-10 12:09:03.943830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.943841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.884 [2024-06-10 12:09:03.943848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.943859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.884 [2024-06-10 12:09:03.943870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.943882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.884 [2024-06-10 12:09:03.943889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.943899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.884 [2024-06-10 12:09:03.943908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.943917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.884 [2024-06-10 12:09:03.943924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.943933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.884 [2024-06-10 12:09:03.943940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.943949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:64832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.884 [2024-06-10 12:09:03.943956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.943965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.884 [2024-06-10 12:09:03.943972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.943981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.884 [2024-06-10 12:09:03.943988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.943996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.884 [2024-06-10 12:09:03.944004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.884 [2024-06-10 12:09:03.944019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.884 [2024-06-10 12:09:03.944035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.884 [2024-06-10 12:09:03.944051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.884 [2024-06-10 12:09:03.944068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.884 [2024-06-10 12:09:03.944085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.884 [2024-06-10 12:09:03.944101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.884 [2024-06-10 12:09:03.944117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.884 [2024-06-10 12:09:03.944133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.884 [2024-06-10 12:09:03.944149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.884 [2024-06-10 12:09:03.944165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.884 [2024-06-10 12:09:03.944181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.884 [2024-06-10 12:09:03.944198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.884 [2024-06-10 12:09:03.944214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.884 [2024-06-10 12:09:03.944229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:64432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.884 [2024-06-10 12:09:03.944250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 
[2024-06-10 12:09:03.944259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.884 [2024-06-10 12:09:03.944266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.884 [2024-06-10 12:09:03.944283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:64480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.884 [2024-06-10 12:09:03.944299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:64488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.884 [2024-06-10 12:09:03.944315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.884 [2024-06-10 12:09:03.944331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.884 [2024-06-10 12:09:03.944347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.884 [2024-06-10 12:09:03.944363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.884 [2024-06-10 12:09:03.944379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:64992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.884 [2024-06-10 12:09:03.944394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.884 [2024-06-10 12:09:03.944410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944420] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.884 [2024-06-10 12:09:03.944427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.884 [2024-06-10 12:09:03.944443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.884 [2024-06-10 12:09:03.944459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.884 [2024-06-10 12:09:03.944475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.884 [2024-06-10 12:09:03.944485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.885 [2024-06-10 12:09:03.944492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.885 [2024-06-10 12:09:03.944502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.885 [2024-06-10 12:09:03.944513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.885 [2024-06-10 12:09:03.944522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.885 [2024-06-10 12:09:03.944529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.885 [2024-06-10 12:09:03.944538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.885 [2024-06-10 12:09:03.944545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.885 [2024-06-10 12:09:03.944554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:65072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.885 [2024-06-10 12:09:03.944562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.885 [2024-06-10 12:09:03.944571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.885 [2024-06-10 12:09:03.944578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.885 [2024-06-10 12:09:03.944587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:104 nsid:1 lba:65088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.885 [2024-06-10 12:09:03.944593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.885 [2024-06-10 12:09:03.944603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:65096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.885 [2024-06-10 12:09:03.944610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.885 [2024-06-10 12:09:03.944619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.885 [2024-06-10 12:09:03.944626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.885 [2024-06-10 12:09:03.944635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.885 [2024-06-10 12:09:03.944642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.885 [2024-06-10 12:09:03.944651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:65120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.885 [2024-06-10 12:09:03.944658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.885 [2024-06-10 12:09:03.944667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.885 [2024-06-10 12:09:03.944674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.885 [2024-06-10 12:09:03.944684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.885 [2024-06-10 12:09:03.944694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.885 [2024-06-10 12:09:03.944704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.885 [2024-06-10 12:09:03.944711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.885 [2024-06-10 12:09:03.944720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.885 [2024-06-10 12:09:03.944727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.885 [2024-06-10 12:09:03.944736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.885 [2024-06-10 12:09:03.944743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.885 [2024-06-10 12:09:03.944752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:64576 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.885 [2024-06-10 12:09:03.944759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.885 [2024-06-10 12:09:03.944768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.885 [2024-06-10 12:09:03.944775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.885 [2024-06-10 12:09:03.944784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.885 [2024-06-10 12:09:03.944791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.885 [2024-06-10 12:09:03.944800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.885 [2024-06-10 12:09:03.944807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.885 [2024-06-10 12:09:03.944816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.885 [2024-06-10 12:09:03.944822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.885 [2024-06-10 12:09:03.944832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.885 [2024-06-10 12:09:03.944839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.885 [2024-06-10 12:09:03.944848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.885 [2024-06-10 12:09:03.944855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.885 [2024-06-10 12:09:03.944864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.885 [2024-06-10 12:09:03.944871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.885 [2024-06-10 12:09:03.944880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.885 [2024-06-10 12:09:03.944887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.885 [2024-06-10 12:09:03.944896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.885 [2024-06-10 12:09:03.944905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.885 [2024-06-10 12:09:03.944913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd24810 is same with the state(5) to be set 00:31:10.885 
[2024-06-10 12:09:03.944922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:10.885 [2024-06-10 12:09:03.944928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:10.885 [2024-06-10 12:09:03.944935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64768 len:8 PRP1 0x0 PRP2 0x0 00:31:10.885 [2024-06-10 12:09:03.944942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.885 [2024-06-10 12:09:03.944980] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd24810 was disconnected and freed. reset controller. 00:31:10.885 [2024-06-10 12:09:03.947344] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.885 [2024-06-10 12:09:03.947389] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.885 [2024-06-10 12:09:03.947968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.885 [2024-06-10 12:09:03.948483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.885 [2024-06-10 12:09:03.948519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.885 [2024-06-10 12:09:03.948529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.885 [2024-06-10 12:09:03.948657] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.885 [2024-06-10 12:09:03.948838] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.885 [2024-06-10 12:09:03.948846] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.885 [2024-06-10 12:09:03.948855] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.885 [2024-06-10 12:09:03.951340] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
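The burst of ABORTED - SQ DELETION completions above, followed by the first failed reset, is the intended effect of this stage of host/bdevperf.sh: the second bdevperf instance is started with -f (continue on failure) and -t 15, and a few seconds into the run the nvmf target process is killed with SIGKILL, so every command still queued on qpair 0xd24810 is aborted and bdev_nvme drops into its disconnect/reconnect loop. A condensed reconstruction of the driving steps visible in this log (the pid capture via $! is an assumption; the flags, pids and sleeps are taken from the host/bdevperf.sh lines above):

# host/bdevperf.sh@29 through @35, approximately
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!        # 2148782 in this run
sleep 3               # let the verify workload get 128 commands in flight
kill -9 2148068       # SIGKILL the nvmf target: queued I/O completes as ABORTED - SQ DELETION
sleep 3               # reconnect attempts now fail until something listens on 10.0.0.2:4420 again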
00:31:10.885 [2024-06-10 12:09:03.960125] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.885 [2024-06-10 12:09:03.960771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.885 [2024-06-10 12:09:03.961009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.885 [2024-06-10 12:09:03.961019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.885 [2024-06-10 12:09:03.961027] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.885 [2024-06-10 12:09:03.961174] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.885 [2024-06-10 12:09:03.961370] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.885 [2024-06-10 12:09:03.961379] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.885 [2024-06-10 12:09:03.961387] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.885 [2024-06-10 12:09:03.963855] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.885 [2024-06-10 12:09:03.972722] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.885 [2024-06-10 12:09:03.973256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.885 [2024-06-10 12:09:03.973597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.885 [2024-06-10 12:09:03.973634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.885 [2024-06-10 12:09:03.973646] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.885 [2024-06-10 12:09:03.973820] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.886 [2024-06-10 12:09:03.974016] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.886 [2024-06-10 12:09:03.974025] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.886 [2024-06-10 12:09:03.974033] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.886 [2024-06-10 12:09:03.976234] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.886 [2024-06-10 12:09:03.985540] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.886 [2024-06-10 12:09:03.986185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.886 [2024-06-10 12:09:03.986573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.886 [2024-06-10 12:09:03.986586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.886 [2024-06-10 12:09:03.986596] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.886 [2024-06-10 12:09:03.986805] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.886 [2024-06-10 12:09:03.986976] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.886 [2024-06-10 12:09:03.986985] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.886 [2024-06-10 12:09:03.986992] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.886 [2024-06-10 12:09:03.989468] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.886 [2024-06-10 12:09:03.998224] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.886 [2024-06-10 12:09:03.998822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.886 [2024-06-10 12:09:03.999176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.886 [2024-06-10 12:09:03.999186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.886 [2024-06-10 12:09:03.999194] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.886 [2024-06-10 12:09:03.999364] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.886 [2024-06-10 12:09:03.999533] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.886 [2024-06-10 12:09:03.999540] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.886 [2024-06-10 12:09:03.999547] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.886 [2024-06-10 12:09:04.001763] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
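Every retry cycle here starts the same way: posix.c:1032 posix_sock_create reports connect() failed, errno = 111, which on Linux is ECONNREFUSED: the TCP connection to 10.0.0.2:4420 is actively refused because the listener died with the target process. A quick way to decode the errno (assuming python3 is available on the test node):

python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# prints: ECONNREFUSED - Connection refused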
00:31:10.886 [2024-06-10 12:09:04.011049] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.886 [2024-06-10 12:09:04.011501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.886 [2024-06-10 12:09:04.011872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.886 [2024-06-10 12:09:04.011882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.886 [2024-06-10 12:09:04.011893] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.886 [2024-06-10 12:09:04.012064] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.886 [2024-06-10 12:09:04.012260] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.886 [2024-06-10 12:09:04.012268] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.886 [2024-06-10 12:09:04.012275] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.886 [2024-06-10 12:09:04.014688] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.886 [2024-06-10 12:09:04.023722] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.886 [2024-06-10 12:09:04.024781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.886 [2024-06-10 12:09:04.025105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.886 [2024-06-10 12:09:04.025117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.886 [2024-06-10 12:09:04.025125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.886 [2024-06-10 12:09:04.025305] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.886 [2024-06-10 12:09:04.025413] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.886 [2024-06-10 12:09:04.025422] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.886 [2024-06-10 12:09:04.025429] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.886 [2024-06-10 12:09:04.027694] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.886 [2024-06-10 12:09:04.036492] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.886 [2024-06-10 12:09:04.036991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.886 [2024-06-10 12:09:04.037345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.886 [2024-06-10 12:09:04.037356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.886 [2024-06-10 12:09:04.037363] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.886 [2024-06-10 12:09:04.037531] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.886 [2024-06-10 12:09:04.037641] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.886 [2024-06-10 12:09:04.037648] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.886 [2024-06-10 12:09:04.037655] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.886 [2024-06-10 12:09:04.040136] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.886 [2024-06-10 12:09:04.049114] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.886 [2024-06-10 12:09:04.049584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.886 [2024-06-10 12:09:04.049965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.886 [2024-06-10 12:09:04.049975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.886 [2024-06-10 12:09:04.049982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.886 [2024-06-10 12:09:04.050178] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.886 [2024-06-10 12:09:04.050374] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.886 [2024-06-10 12:09:04.050383] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.886 [2024-06-10 12:09:04.050390] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.886 [2024-06-10 12:09:04.052977] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.886 [2024-06-10 12:09:04.061770] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.886 [2024-06-10 12:09:04.062268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.886 [2024-06-10 12:09:04.062699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.886 [2024-06-10 12:09:04.062709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.886 [2024-06-10 12:09:04.062716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.886 [2024-06-10 12:09:04.062828] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.886 [2024-06-10 12:09:04.062998] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.886 [2024-06-10 12:09:04.063006] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.886 [2024-06-10 12:09:04.063013] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.886 [2024-06-10 12:09:04.065424] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.886 [2024-06-10 12:09:04.074545] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.886 [2024-06-10 12:09:04.075140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.886 [2024-06-10 12:09:04.075546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.886 [2024-06-10 12:09:04.075561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.887 [2024-06-10 12:09:04.075570] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.887 [2024-06-10 12:09:04.075721] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.887 [2024-06-10 12:09:04.075831] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.887 [2024-06-10 12:09:04.075839] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.887 [2024-06-10 12:09:04.075848] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.887 [2024-06-10 12:09:04.078287] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.887 [2024-06-10 12:09:04.087231] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.887 [2024-06-10 12:09:04.087634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.887 [2024-06-10 12:09:04.087994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.887 [2024-06-10 12:09:04.088004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.887 [2024-06-10 12:09:04.088012] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.887 [2024-06-10 12:09:04.088147] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.887 [2024-06-10 12:09:04.088279] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.887 [2024-06-10 12:09:04.088287] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.887 [2024-06-10 12:09:04.088294] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.887 [2024-06-10 12:09:04.090665] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.887 [2024-06-10 12:09:04.099709] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.887 [2024-06-10 12:09:04.100252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.887 [2024-06-10 12:09:04.100431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.887 [2024-06-10 12:09:04.100440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.887 [2024-06-10 12:09:04.100448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.887 [2024-06-10 12:09:04.100603] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.887 [2024-06-10 12:09:04.100694] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.887 [2024-06-10 12:09:04.100702] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.887 [2024-06-10 12:09:04.100708] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.887 [2024-06-10 12:09:04.103111] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.887 [2024-06-10 12:09:04.112423] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.887 [2024-06-10 12:09:04.112875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.887 [2024-06-10 12:09:04.113221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.887 [2024-06-10 12:09:04.113231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.887 [2024-06-10 12:09:04.113238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.887 [2024-06-10 12:09:04.113393] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.887 [2024-06-10 12:09:04.113583] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.887 [2024-06-10 12:09:04.113590] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.887 [2024-06-10 12:09:04.113597] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.887 [2024-06-10 12:09:04.115963] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.887 [2024-06-10 12:09:04.125093] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.887 [2024-06-10 12:09:04.125623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.887 [2024-06-10 12:09:04.125945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.887 [2024-06-10 12:09:04.125959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.887 [2024-06-10 12:09:04.125968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.887 [2024-06-10 12:09:04.126158] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.887 [2024-06-10 12:09:04.126317] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.887 [2024-06-10 12:09:04.126331] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.887 [2024-06-10 12:09:04.126338] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.887 [2024-06-10 12:09:04.128740] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.887 [2024-06-10 12:09:04.137833] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.887 [2024-06-10 12:09:04.138370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.887 [2024-06-10 12:09:04.138799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.887 [2024-06-10 12:09:04.138811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.887 [2024-06-10 12:09:04.138821] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.887 [2024-06-10 12:09:04.138968] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.887 [2024-06-10 12:09:04.139099] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.887 [2024-06-10 12:09:04.139107] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.887 [2024-06-10 12:09:04.139115] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.887 [2024-06-10 12:09:04.141400] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.887 [2024-06-10 12:09:04.150431] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.887 [2024-06-10 12:09:04.151073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.887 [2024-06-10 12:09:04.151465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.887 [2024-06-10 12:09:04.151480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.887 [2024-06-10 12:09:04.151489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.887 [2024-06-10 12:09:04.151618] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.887 [2024-06-10 12:09:04.151794] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.887 [2024-06-10 12:09:04.151803] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.887 [2024-06-10 12:09:04.151810] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.887 [2024-06-10 12:09:04.154128] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.887 [2024-06-10 12:09:04.163307] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.887 [2024-06-10 12:09:04.163792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.887 [2024-06-10 12:09:04.164146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.887 [2024-06-10 12:09:04.164156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.887 [2024-06-10 12:09:04.164163] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.887 [2024-06-10 12:09:04.164316] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.887 [2024-06-10 12:09:04.164444] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.887 [2024-06-10 12:09:04.164452] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.887 [2024-06-10 12:09:04.164464] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.887 [2024-06-10 12:09:04.166897] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.887 [2024-06-10 12:09:04.175862] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.887 [2024-06-10 12:09:04.176372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.887 [2024-06-10 12:09:04.176773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.887 [2024-06-10 12:09:04.176782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.887 [2024-06-10 12:09:04.176790] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.887 [2024-06-10 12:09:04.176979] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.887 [2024-06-10 12:09:04.177171] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.887 [2024-06-10 12:09:04.177179] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.887 [2024-06-10 12:09:04.177186] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.887 [2024-06-10 12:09:04.179598] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.887 [2024-06-10 12:09:04.188627] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.887 [2024-06-10 12:09:04.189193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.887 [2024-06-10 12:09:04.189448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.887 [2024-06-10 12:09:04.189459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.887 [2024-06-10 12:09:04.189466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.887 [2024-06-10 12:09:04.189576] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.887 [2024-06-10 12:09:04.189747] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.887 [2024-06-10 12:09:04.189754] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.887 [2024-06-10 12:09:04.189761] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.888 [2024-06-10 12:09:04.192205] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.888 [2024-06-10 12:09:04.201194] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.888 [2024-06-10 12:09:04.201688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.888 [2024-06-10 12:09:04.202069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.888 [2024-06-10 12:09:04.202080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.888 [2024-06-10 12:09:04.202087] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.888 [2024-06-10 12:09:04.202217] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.888 [2024-06-10 12:09:04.202331] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.888 [2024-06-10 12:09:04.202339] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.888 [2024-06-10 12:09:04.202346] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.888 [2024-06-10 12:09:04.204696] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.888 [2024-06-10 12:09:04.213666] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.888 [2024-06-10 12:09:04.214228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.888 [2024-06-10 12:09:04.214600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.888 [2024-06-10 12:09:04.214611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.888 [2024-06-10 12:09:04.214618] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.888 [2024-06-10 12:09:04.214749] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.888 [2024-06-10 12:09:04.214877] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.888 [2024-06-10 12:09:04.214884] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.888 [2024-06-10 12:09:04.214891] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.888 [2024-06-10 12:09:04.217305] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.888 [2024-06-10 12:09:04.226356] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.888 [2024-06-10 12:09:04.226857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.888 [2024-06-10 12:09:04.227174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.888 [2024-06-10 12:09:04.227185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.888 [2024-06-10 12:09:04.227192] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.888 [2024-06-10 12:09:04.227422] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.888 [2024-06-10 12:09:04.227593] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.888 [2024-06-10 12:09:04.227601] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.888 [2024-06-10 12:09:04.227608] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.888 [2024-06-10 12:09:04.229876] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.888 [2024-06-10 12:09:04.238859] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.888 [2024-06-10 12:09:04.239484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.888 [2024-06-10 12:09:04.239865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.888 [2024-06-10 12:09:04.239878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.888 [2024-06-10 12:09:04.239887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.888 [2024-06-10 12:09:04.240087] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.888 [2024-06-10 12:09:04.240215] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.888 [2024-06-10 12:09:04.240224] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.888 [2024-06-10 12:09:04.240231] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.888 [2024-06-10 12:09:04.242737] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.888 [2024-06-10 12:09:04.251556] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.888 [2024-06-10 12:09:04.252020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.888 [2024-06-10 12:09:04.252480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.888 [2024-06-10 12:09:04.252516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.888 [2024-06-10 12:09:04.252527] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.888 [2024-06-10 12:09:04.252693] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.888 [2024-06-10 12:09:04.252862] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.888 [2024-06-10 12:09:04.252871] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.888 [2024-06-10 12:09:04.252878] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.888 [2024-06-10 12:09:04.255277] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.888 [2024-06-10 12:09:04.264121] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.888 [2024-06-10 12:09:04.264633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.888 [2024-06-10 12:09:04.264982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.888 [2024-06-10 12:09:04.264992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.888 [2024-06-10 12:09:04.265000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.888 [2024-06-10 12:09:04.265070] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.888 [2024-06-10 12:09:04.265267] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.888 [2024-06-10 12:09:04.265275] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.888 [2024-06-10 12:09:04.265282] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.888 [2024-06-10 12:09:04.267549] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.888 [2024-06-10 12:09:04.276737] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.888 [2024-06-10 12:09:04.277325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.888 [2024-06-10 12:09:04.277639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.888 [2024-06-10 12:09:04.277652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.888 [2024-06-10 12:09:04.277661] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.888 [2024-06-10 12:09:04.277851] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.888 [2024-06-10 12:09:04.278001] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.888 [2024-06-10 12:09:04.278010] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.888 [2024-06-10 12:09:04.278017] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.888 [2024-06-10 12:09:04.280421] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.888 [2024-06-10 12:09:04.289294] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.888 [2024-06-10 12:09:04.289826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.888 [2024-06-10 12:09:04.290255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.888 [2024-06-10 12:09:04.290266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.888 [2024-06-10 12:09:04.290274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.888 [2024-06-10 12:09:04.290427] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.888 [2024-06-10 12:09:04.290536] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.888 [2024-06-10 12:09:04.290544] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.888 [2024-06-10 12:09:04.290551] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.888 [2024-06-10 12:09:04.293002] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.888 [2024-06-10 12:09:04.301929] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.888 [2024-06-10 12:09:04.302554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.888 [2024-06-10 12:09:04.302930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.888 [2024-06-10 12:09:04.302943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.888 [2024-06-10 12:09:04.302953] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.888 [2024-06-10 12:09:04.303102] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.888 [2024-06-10 12:09:04.303300] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.888 [2024-06-10 12:09:04.303310] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.888 [2024-06-10 12:09:04.303317] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.888 [2024-06-10 12:09:04.305839] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.888 [2024-06-10 12:09:04.314531] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.888 [2024-06-10 12:09:04.315010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.888 [2024-06-10 12:09:04.315407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.889 [2024-06-10 12:09:04.315424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.889 [2024-06-10 12:09:04.315433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.889 [2024-06-10 12:09:04.315600] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.889 [2024-06-10 12:09:04.315736] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.889 [2024-06-10 12:09:04.315744] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.889 [2024-06-10 12:09:04.315752] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.889 [2024-06-10 12:09:04.318021] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.889 [2024-06-10 12:09:04.327136] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.889 [2024-06-10 12:09:04.327785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.889 [2024-06-10 12:09:04.328035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.889 [2024-06-10 12:09:04.328052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.889 [2024-06-10 12:09:04.328061] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.889 [2024-06-10 12:09:04.328256] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.889 [2024-06-10 12:09:04.328429] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.889 [2024-06-10 12:09:04.328438] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.889 [2024-06-10 12:09:04.328445] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.889 [2024-06-10 12:09:04.330730] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.889 [2024-06-10 12:09:04.339997] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.889 [2024-06-10 12:09:04.340508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.889 [2024-06-10 12:09:04.340929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.889 [2024-06-10 12:09:04.340943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.889 [2024-06-10 12:09:04.340952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.889 [2024-06-10 12:09:04.341123] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.889 [2024-06-10 12:09:04.341283] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.889 [2024-06-10 12:09:04.341292] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.889 [2024-06-10 12:09:04.341300] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.889 [2024-06-10 12:09:04.343668] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.889 [2024-06-10 12:09:04.352792] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.889 [2024-06-10 12:09:04.353179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.889 [2024-06-10 12:09:04.353579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.889 [2024-06-10 12:09:04.353590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.889 [2024-06-10 12:09:04.353598] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.889 [2024-06-10 12:09:04.353745] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.889 [2024-06-10 12:09:04.353889] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.889 [2024-06-10 12:09:04.353897] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.889 [2024-06-10 12:09:04.353904] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.889 [2024-06-10 12:09:04.356265] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.889 [2024-06-10 12:09:04.365413] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.889 [2024-06-10 12:09:04.366025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.889 [2024-06-10 12:09:04.366445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.889 [2024-06-10 12:09:04.366482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.889 [2024-06-10 12:09:04.366498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.889 [2024-06-10 12:09:04.366695] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.889 [2024-06-10 12:09:04.366897] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.889 [2024-06-10 12:09:04.366906] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.889 [2024-06-10 12:09:04.366913] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.889 [2024-06-10 12:09:04.369234] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.889 [2024-06-10 12:09:04.378159] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.889 [2024-06-10 12:09:04.378691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.889 [2024-06-10 12:09:04.379079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.889 [2024-06-10 12:09:04.379089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.889 [2024-06-10 12:09:04.379097] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.889 [2024-06-10 12:09:04.379228] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.889 [2024-06-10 12:09:04.379382] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.889 [2024-06-10 12:09:04.379390] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.889 [2024-06-10 12:09:04.379397] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.889 [2024-06-10 12:09:04.381715] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.889 [2024-06-10 12:09:04.390830] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.889 [2024-06-10 12:09:04.391392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.889 [2024-06-10 12:09:04.391699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.889 [2024-06-10 12:09:04.391712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.889 [2024-06-10 12:09:04.391721] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.889 [2024-06-10 12:09:04.391908] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.889 [2024-06-10 12:09:04.392018] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.889 [2024-06-10 12:09:04.392027] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.889 [2024-06-10 12:09:04.392034] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.889 [2024-06-10 12:09:04.394354] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.889 [2024-06-10 12:09:04.403351] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.889 [2024-06-10 12:09:04.403809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.889 [2024-06-10 12:09:04.404181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.889 [2024-06-10 12:09:04.404194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.889 [2024-06-10 12:09:04.404203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.889 [2024-06-10 12:09:04.404385] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.889 [2024-06-10 12:09:04.404560] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.889 [2024-06-10 12:09:04.404568] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.889 [2024-06-10 12:09:04.404576] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.889 [2024-06-10 12:09:04.406918] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.889 [2024-06-10 12:09:04.416022] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.889 [2024-06-10 12:09:04.416573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.889 [2024-06-10 12:09:04.416846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.889 [2024-06-10 12:09:04.416860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.889 [2024-06-10 12:09:04.416869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.889 [2024-06-10 12:09:04.417078] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.889 [2024-06-10 12:09:04.417213] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.889 [2024-06-10 12:09:04.417221] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.889 [2024-06-10 12:09:04.417228] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.889 [2024-06-10 12:09:04.419588] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.889 [2024-06-10 12:09:04.428335] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.889 [2024-06-10 12:09:04.428954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.889 [2024-06-10 12:09:04.429326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.889 [2024-06-10 12:09:04.429341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.889 [2024-06-10 12:09:04.429351] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.889 [2024-06-10 12:09:04.429479] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.889 [2024-06-10 12:09:04.429589] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.889 [2024-06-10 12:09:04.429596] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.890 [2024-06-10 12:09:04.429603] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.890 [2024-06-10 12:09:04.432200] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.890 [2024-06-10 12:09:04.440772] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.890 [2024-06-10 12:09:04.441181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.890 [2024-06-10 12:09:04.441413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.890 [2024-06-10 12:09:04.441424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.890 [2024-06-10 12:09:04.441431] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.890 [2024-06-10 12:09:04.441622] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.890 [2024-06-10 12:09:04.441836] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.890 [2024-06-10 12:09:04.441844] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.890 [2024-06-10 12:09:04.441851] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.890 [2024-06-10 12:09:04.444240] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.890 [2024-06-10 12:09:04.453391] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.890 [2024-06-10 12:09:04.454001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.890 [2024-06-10 12:09:04.454291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.890 [2024-06-10 12:09:04.454306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.890 [2024-06-10 12:09:04.454316] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.890 [2024-06-10 12:09:04.454485] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.890 [2024-06-10 12:09:04.454617] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.890 [2024-06-10 12:09:04.454626] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.890 [2024-06-10 12:09:04.454633] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.890 [2024-06-10 12:09:04.457258] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.890 [2024-06-10 12:09:04.465887] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.890 [2024-06-10 12:09:04.466524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.890 [2024-06-10 12:09:04.467020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.890 [2024-06-10 12:09:04.467033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.890 [2024-06-10 12:09:04.467042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.890 [2024-06-10 12:09:04.467216] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.890 [2024-06-10 12:09:04.467360] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.890 [2024-06-10 12:09:04.467369] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.890 [2024-06-10 12:09:04.467376] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.890 [2024-06-10 12:09:04.469727] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.890 [2024-06-10 12:09:04.478587] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.890 [2024-06-10 12:09:04.479130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.890 [2024-06-10 12:09:04.479520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.890 [2024-06-10 12:09:04.479532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.890 [2024-06-10 12:09:04.479539] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.890 [2024-06-10 12:09:04.479674] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.890 [2024-06-10 12:09:04.479845] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.890 [2024-06-10 12:09:04.479857] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.890 [2024-06-10 12:09:04.479864] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.890 [2024-06-10 12:09:04.482297] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.890 [2024-06-10 12:09:04.491480] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.890 [2024-06-10 12:09:04.491976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.890 [2024-06-10 12:09:04.492458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.890 [2024-06-10 12:09:04.492495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.890 [2024-06-10 12:09:04.492505] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.890 [2024-06-10 12:09:04.492714] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.890 [2024-06-10 12:09:04.492886] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.890 [2024-06-10 12:09:04.492895] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.890 [2024-06-10 12:09:04.492903] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.890 [2024-06-10 12:09:04.495470] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.890 [2024-06-10 12:09:04.504337] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.890 [2024-06-10 12:09:04.504817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.890 [2024-06-10 12:09:04.505168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.890 [2024-06-10 12:09:04.505178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.890 [2024-06-10 12:09:04.505185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.890 [2024-06-10 12:09:04.505302] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.890 [2024-06-10 12:09:04.505453] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.890 [2024-06-10 12:09:04.505460] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.890 [2024-06-10 12:09:04.505467] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.890 [2024-06-10 12:09:04.507836] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.890 [2024-06-10 12:09:04.516977] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.890 [2024-06-10 12:09:04.517603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.890 [2024-06-10 12:09:04.517984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.890 [2024-06-10 12:09:04.517997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.890 [2024-06-10 12:09:04.518006] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.890 [2024-06-10 12:09:04.518212] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.890 [2024-06-10 12:09:04.518409] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.890 [2024-06-10 12:09:04.518419] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.890 [2024-06-10 12:09:04.518430] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.890 [2024-06-10 12:09:04.520841] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.890 [2024-06-10 12:09:04.529619] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.890 [2024-06-10 12:09:04.530279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.890 [2024-06-10 12:09:04.530659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.890 [2024-06-10 12:09:04.530672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.890 [2024-06-10 12:09:04.530681] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.890 [2024-06-10 12:09:04.530912] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.890 [2024-06-10 12:09:04.531067] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.890 [2024-06-10 12:09:04.531076] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.890 [2024-06-10 12:09:04.531083] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.890 [2024-06-10 12:09:04.533421] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.890 [2024-06-10 12:09:04.542474] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.890 [2024-06-10 12:09:04.543116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.890 [2024-06-10 12:09:04.543408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.890 [2024-06-10 12:09:04.543422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.890 [2024-06-10 12:09:04.543431] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.890 [2024-06-10 12:09:04.543620] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.890 [2024-06-10 12:09:04.543755] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.890 [2024-06-10 12:09:04.543763] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.890 [2024-06-10 12:09:04.543770] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.890 [2024-06-10 12:09:04.546278] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.890 [2024-06-10 12:09:04.555023] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.890 [2024-06-10 12:09:04.555576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.890 [2024-06-10 12:09:04.555921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.891 [2024-06-10 12:09:04.555930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.891 [2024-06-10 12:09:04.555938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.891 [2024-06-10 12:09:04.556106] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.891 [2024-06-10 12:09:04.556275] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.891 [2024-06-10 12:09:04.556284] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.891 [2024-06-10 12:09:04.556291] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.891 [2024-06-10 12:09:04.558712] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.891 [2024-06-10 12:09:04.567415] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.891 [2024-06-10 12:09:04.567951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.891 [2024-06-10 12:09:04.568296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.891 [2024-06-10 12:09:04.568307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.891 [2024-06-10 12:09:04.568314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.891 [2024-06-10 12:09:04.568467] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.891 [2024-06-10 12:09:04.568616] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.891 [2024-06-10 12:09:04.568624] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.891 [2024-06-10 12:09:04.568631] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.891 [2024-06-10 12:09:04.571176] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.891 [2024-06-10 12:09:04.579957] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.891 [2024-06-10 12:09:04.580515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.891 [2024-06-10 12:09:04.580808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.891 [2024-06-10 12:09:04.580821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.891 [2024-06-10 12:09:04.580830] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.891 [2024-06-10 12:09:04.581018] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.891 [2024-06-10 12:09:04.581211] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.891 [2024-06-10 12:09:04.581219] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.891 [2024-06-10 12:09:04.581226] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.891 [2024-06-10 12:09:04.583647] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.891 [2024-06-10 12:09:04.592524] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.891 [2024-06-10 12:09:04.593164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.891 [2024-06-10 12:09:04.593550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.891 [2024-06-10 12:09:04.593564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.891 [2024-06-10 12:09:04.593573] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.891 [2024-06-10 12:09:04.593804] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.891 [2024-06-10 12:09:04.593956] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.891 [2024-06-10 12:09:04.593965] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.891 [2024-06-10 12:09:04.593972] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.891 [2024-06-10 12:09:04.596344] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.891 [2024-06-10 12:09:04.605306] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.891 [2024-06-10 12:09:04.605820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.891 [2024-06-10 12:09:04.606062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.891 [2024-06-10 12:09:04.606072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.891 [2024-06-10 12:09:04.606079] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.891 [2024-06-10 12:09:04.606261] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.891 [2024-06-10 12:09:04.606411] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.891 [2024-06-10 12:09:04.606419] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.891 [2024-06-10 12:09:04.606425] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.891 [2024-06-10 12:09:04.608579] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.891 [2024-06-10 12:09:04.618044] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.891 [2024-06-10 12:09:04.618548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.891 [2024-06-10 12:09:04.618933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.891 [2024-06-10 12:09:04.618946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.891 [2024-06-10 12:09:04.618955] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.891 [2024-06-10 12:09:04.619143] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.891 [2024-06-10 12:09:04.619281] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.891 [2024-06-10 12:09:04.619290] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.891 [2024-06-10 12:09:04.619297] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.891 [2024-06-10 12:09:04.621613] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.891 [2024-06-10 12:09:04.630528] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.891 [2024-06-10 12:09:04.631147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.891 [2024-06-10 12:09:04.631527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.891 [2024-06-10 12:09:04.631541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.891 [2024-06-10 12:09:04.631550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.891 [2024-06-10 12:09:04.631698] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.891 [2024-06-10 12:09:04.631808] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.891 [2024-06-10 12:09:04.631815] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.891 [2024-06-10 12:09:04.631822] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.891 [2024-06-10 12:09:04.634104] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.891 [2024-06-10 12:09:04.643026] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.891 [2024-06-10 12:09:04.643485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.891 [2024-06-10 12:09:04.643836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.891 [2024-06-10 12:09:04.643846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:10.891 [2024-06-10 12:09:04.643854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:10.891 [2024-06-10 12:09:04.644019] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:10.891 [2024-06-10 12:09:04.644212] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.891 [2024-06-10 12:09:04.644219] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.891 [2024-06-10 12:09:04.644226] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.891 [2024-06-10 12:09:04.646608] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.154 [2024-06-10 12:09:04.655732] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.154 [2024-06-10 12:09:04.656319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.154 [2024-06-10 12:09:04.656699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.154 [2024-06-10 12:09:04.656711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.154 [2024-06-10 12:09:04.656721] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.154 [2024-06-10 12:09:04.656932] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.154 [2024-06-10 12:09:04.657104] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.154 [2024-06-10 12:09:04.657113] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.154 [2024-06-10 12:09:04.657120] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.154 [2024-06-10 12:09:04.659292] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.154 [2024-06-10 12:09:04.668353] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.154 [2024-06-10 12:09:04.668941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.154 [2024-06-10 12:09:04.669315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.154 [2024-06-10 12:09:04.669329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.154 [2024-06-10 12:09:04.669339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.154 [2024-06-10 12:09:04.669545] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.154 [2024-06-10 12:09:04.669717] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.154 [2024-06-10 12:09:04.669725] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.154 [2024-06-10 12:09:04.669733] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.154 [2024-06-10 12:09:04.672114] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.154 [2024-06-10 12:09:04.680895] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.154 [2024-06-10 12:09:04.681605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.154 [2024-06-10 12:09:04.681979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.154 [2024-06-10 12:09:04.681996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.154 [2024-06-10 12:09:04.682006] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.154 [2024-06-10 12:09:04.682138] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.154 [2024-06-10 12:09:04.682308] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.154 [2024-06-10 12:09:04.682317] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.154 [2024-06-10 12:09:04.682324] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.154 [2024-06-10 12:09:04.684696] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.154 [2024-06-10 12:09:04.693612] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.154 [2024-06-10 12:09:04.694161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.154 [2024-06-10 12:09:04.694551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.154 [2024-06-10 12:09:04.694565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.154 [2024-06-10 12:09:04.694574] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.154 [2024-06-10 12:09:04.694719] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.154 [2024-06-10 12:09:04.694912] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.154 [2024-06-10 12:09:04.694920] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.154 [2024-06-10 12:09:04.694927] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.154 [2024-06-10 12:09:04.697193] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.154 [2024-06-10 12:09:04.706224] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.154 [2024-06-10 12:09:04.706696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.154 [2024-06-10 12:09:04.707093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.154 [2024-06-10 12:09:04.707106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.154 [2024-06-10 12:09:04.707116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.154 [2024-06-10 12:09:04.707309] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.154 [2024-06-10 12:09:04.707457] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.154 [2024-06-10 12:09:04.707465] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.154 [2024-06-10 12:09:04.707472] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.154 [2024-06-10 12:09:04.709922] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.154 [2024-06-10 12:09:04.719023] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.154 [2024-06-10 12:09:04.719629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.154 [2024-06-10 12:09:04.720008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.154 [2024-06-10 12:09:04.720021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.154 [2024-06-10 12:09:04.720034] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.154 [2024-06-10 12:09:04.720184] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.154 [2024-06-10 12:09:04.720353] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.154 [2024-06-10 12:09:04.720362] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.154 [2024-06-10 12:09:04.720369] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.154 [2024-06-10 12:09:04.722777] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.154 [2024-06-10 12:09:04.731478] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.154 [2024-06-10 12:09:04.732026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.154 [2024-06-10 12:09:04.732394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.154 [2024-06-10 12:09:04.732408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.154 [2024-06-10 12:09:04.732417] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.154 [2024-06-10 12:09:04.732589] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.154 [2024-06-10 12:09:04.732742] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.154 [2024-06-10 12:09:04.732750] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.154 [2024-06-10 12:09:04.732757] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.154 [2024-06-10 12:09:04.735172] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.154 [2024-06-10 12:09:04.744188] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.154 [2024-06-10 12:09:04.744727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.154 [2024-06-10 12:09:04.745098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.154 [2024-06-10 12:09:04.745111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.154 [2024-06-10 12:09:04.745120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.154 [2024-06-10 12:09:04.745258] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.154 [2024-06-10 12:09:04.745474] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.154 [2024-06-10 12:09:04.745482] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.154 [2024-06-10 12:09:04.745489] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.154 [2024-06-10 12:09:04.747923] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.154 [2024-06-10 12:09:04.756823] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.154 [2024-06-10 12:09:04.757418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.154 [2024-06-10 12:09:04.757794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.154 [2024-06-10 12:09:04.757806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.154 [2024-06-10 12:09:04.757816] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.154 [2024-06-10 12:09:04.757945] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.154 [2024-06-10 12:09:04.758061] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.155 [2024-06-10 12:09:04.758069] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.155 [2024-06-10 12:09:04.758076] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.155 [2024-06-10 12:09:04.760372] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.155 [2024-06-10 12:09:04.769428] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.155 [2024-06-10 12:09:04.770017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.155 [2024-06-10 12:09:04.770302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.155 [2024-06-10 12:09:04.770318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.155 [2024-06-10 12:09:04.770327] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.155 [2024-06-10 12:09:04.770477] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.155 [2024-06-10 12:09:04.770649] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.155 [2024-06-10 12:09:04.770657] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.155 [2024-06-10 12:09:04.770664] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.155 [2024-06-10 12:09:04.772849] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.155 [2024-06-10 12:09:04.782141] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.155 [2024-06-10 12:09:04.782801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.155 [2024-06-10 12:09:04.783175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.155 [2024-06-10 12:09:04.783187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.155 [2024-06-10 12:09:04.783197] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.155 [2024-06-10 12:09:04.783382] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.155 [2024-06-10 12:09:04.783512] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.155 [2024-06-10 12:09:04.783520] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.155 [2024-06-10 12:09:04.783527] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.155 [2024-06-10 12:09:04.786073] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.155 [2024-06-10 12:09:04.794762] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.155 [2024-06-10 12:09:04.795446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.155 [2024-06-10 12:09:04.795798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.155 [2024-06-10 12:09:04.795811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.155 [2024-06-10 12:09:04.795820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.155 [2024-06-10 12:09:04.795965] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.155 [2024-06-10 12:09:04.796118] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.155 [2024-06-10 12:09:04.796127] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.155 [2024-06-10 12:09:04.796134] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.155 [2024-06-10 12:09:04.798536] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.155 [2024-06-10 12:09:04.807228] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.155 [2024-06-10 12:09:04.807859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.155 [2024-06-10 12:09:04.808234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.155 [2024-06-10 12:09:04.808255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.155 [2024-06-10 12:09:04.808264] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.155 [2024-06-10 12:09:04.808452] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.155 [2024-06-10 12:09:04.808586] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.155 [2024-06-10 12:09:04.808594] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.155 [2024-06-10 12:09:04.808601] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.155 [2024-06-10 12:09:04.810896] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.155 [2024-06-10 12:09:04.819754] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.155 [2024-06-10 12:09:04.820234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.155 [2024-06-10 12:09:04.820633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.155 [2024-06-10 12:09:04.820646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.155 [2024-06-10 12:09:04.820655] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.155 [2024-06-10 12:09:04.820786] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.155 [2024-06-10 12:09:04.820918] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.155 [2024-06-10 12:09:04.820926] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.155 [2024-06-10 12:09:04.820933] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.155 [2024-06-10 12:09:04.823363] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.155 [2024-06-10 12:09:04.832142] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.155 [2024-06-10 12:09:04.832690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.155 [2024-06-10 12:09:04.833079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.155 [2024-06-10 12:09:04.833092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.155 [2024-06-10 12:09:04.833101] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.155 [2024-06-10 12:09:04.833261] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.155 [2024-06-10 12:09:04.833418] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.155 [2024-06-10 12:09:04.833426] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.155 [2024-06-10 12:09:04.833437] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.155 [2024-06-10 12:09:04.835875] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.155 [2024-06-10 12:09:04.844819] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.155 [2024-06-10 12:09:04.845508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.155 [2024-06-10 12:09:04.845740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.155 [2024-06-10 12:09:04.845755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.155 [2024-06-10 12:09:04.845764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.155 [2024-06-10 12:09:04.845912] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.155 [2024-06-10 12:09:04.846089] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.155 [2024-06-10 12:09:04.846097] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.155 [2024-06-10 12:09:04.846104] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.155 [2024-06-10 12:09:04.848481] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.155 [2024-06-10 12:09:04.857676] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.155 [2024-06-10 12:09:04.858198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.155 [2024-06-10 12:09:04.858487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.155 [2024-06-10 12:09:04.858501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.155 [2024-06-10 12:09:04.858510] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.155 [2024-06-10 12:09:04.858660] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.155 [2024-06-10 12:09:04.858789] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.155 [2024-06-10 12:09:04.858797] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.155 [2024-06-10 12:09:04.858804] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.155 [2024-06-10 12:09:04.861301] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.155 [2024-06-10 12:09:04.870134] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.155 [2024-06-10 12:09:04.870731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.155 [2024-06-10 12:09:04.871106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.155 [2024-06-10 12:09:04.871119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.155 [2024-06-10 12:09:04.871128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.155 [2024-06-10 12:09:04.871268] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.155 [2024-06-10 12:09:04.871422] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.155 [2024-06-10 12:09:04.871430] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.155 [2024-06-10 12:09:04.871437] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.155 [2024-06-10 12:09:04.873928] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.155 [2024-06-10 12:09:04.882892] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.155 [2024-06-10 12:09:04.883486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.155 [2024-06-10 12:09:04.883862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.156 [2024-06-10 12:09:04.883874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.156 [2024-06-10 12:09:04.883884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.156 [2024-06-10 12:09:04.884087] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.156 [2024-06-10 12:09:04.884218] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.156 [2024-06-10 12:09:04.884226] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.156 [2024-06-10 12:09:04.884234] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.156 [2024-06-10 12:09:04.886663] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.156 [2024-06-10 12:09:04.895406] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.156 [2024-06-10 12:09:04.896022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.156 [2024-06-10 12:09:04.896409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.156 [2024-06-10 12:09:04.896424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.156 [2024-06-10 12:09:04.896433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.156 [2024-06-10 12:09:04.896559] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.156 [2024-06-10 12:09:04.896705] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.156 [2024-06-10 12:09:04.896714] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.156 [2024-06-10 12:09:04.896721] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.156 [2024-06-10 12:09:04.899091] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.156 [2024-06-10 12:09:04.908020] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.156 [2024-06-10 12:09:04.908649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.156 [2024-06-10 12:09:04.909025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.156 [2024-06-10 12:09:04.909038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.156 [2024-06-10 12:09:04.909047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.156 [2024-06-10 12:09:04.909194] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.156 [2024-06-10 12:09:04.909315] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.156 [2024-06-10 12:09:04.909324] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.156 [2024-06-10 12:09:04.909331] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.156 [2024-06-10 12:09:04.911785] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.156 [2024-06-10 12:09:04.920581] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.156 [2024-06-10 12:09:04.921274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.156 [2024-06-10 12:09:04.921664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.156 [2024-06-10 12:09:04.921677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.156 [2024-06-10 12:09:04.921686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.156 [2024-06-10 12:09:04.921895] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.156 [2024-06-10 12:09:04.922050] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.156 [2024-06-10 12:09:04.922058] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.156 [2024-06-10 12:09:04.922066] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.418 [2024-06-10 12:09:04.924437] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.418 [2024-06-10 12:09:04.933022] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.418 [2024-06-10 12:09:04.933587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.418 [2024-06-10 12:09:04.933859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.418 [2024-06-10 12:09:04.933872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.418 [2024-06-10 12:09:04.933882] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.418 [2024-06-10 12:09:04.934051] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.418 [2024-06-10 12:09:04.934251] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.418 [2024-06-10 12:09:04.934261] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.418 [2024-06-10 12:09:04.934268] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.418 [2024-06-10 12:09:04.936558] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.418 [2024-06-10 12:09:04.945675] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.418 [2024-06-10 12:09:04.946259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.418 [2024-06-10 12:09:04.946663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.418 [2024-06-10 12:09:04.946676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.418 [2024-06-10 12:09:04.946686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.418 [2024-06-10 12:09:04.946834] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.418 [2024-06-10 12:09:04.946966] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.418 [2024-06-10 12:09:04.946974] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.418 [2024-06-10 12:09:04.946982] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.418 [2024-06-10 12:09:04.949261] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.418 [2024-06-10 12:09:04.958206] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.418 [2024-06-10 12:09:04.958853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.418 [2024-06-10 12:09:04.959225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.418 [2024-06-10 12:09:04.959238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.418 [2024-06-10 12:09:04.959258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.418 [2024-06-10 12:09:04.959433] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.418 [2024-06-10 12:09:04.959567] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.418 [2024-06-10 12:09:04.959575] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.418 [2024-06-10 12:09:04.959582] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.418 [2024-06-10 12:09:04.962111] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.418 [2024-06-10 12:09:04.970760] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.418 [2024-06-10 12:09:04.971257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.418 [2024-06-10 12:09:04.971738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.418 [2024-06-10 12:09:04.971751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.418 [2024-06-10 12:09:04.971761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.418 [2024-06-10 12:09:04.971964] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.418 [2024-06-10 12:09:04.972098] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.418 [2024-06-10 12:09:04.972106] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.418 [2024-06-10 12:09:04.972114] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.418 [2024-06-10 12:09:04.974399] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.418 [2024-06-10 12:09:04.983268] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.418 [2024-06-10 12:09:04.983849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.418 [2024-06-10 12:09:04.984230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.418 [2024-06-10 12:09:04.984251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.418 [2024-06-10 12:09:04.984261] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.418 [2024-06-10 12:09:04.984411] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.418 [2024-06-10 12:09:04.984582] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.418 [2024-06-10 12:09:04.984590] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.418 [2024-06-10 12:09:04.984598] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.418 [2024-06-10 12:09:04.987136] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.418 [2024-06-10 12:09:04.995997] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.418 [2024-06-10 12:09:04.996572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.418 [2024-06-10 12:09:04.996945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.418 [2024-06-10 12:09:04.996961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.418 [2024-06-10 12:09:04.996970] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.418 [2024-06-10 12:09:04.997139] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.418 [2024-06-10 12:09:04.997325] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.418 [2024-06-10 12:09:04.997334] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.418 [2024-06-10 12:09:04.997341] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.418 [2024-06-10 12:09:04.999735] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.418 [2024-06-10 12:09:05.008586] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.418 [2024-06-10 12:09:05.009217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.418 [2024-06-10 12:09:05.009581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.418 [2024-06-10 12:09:05.009594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.418 [2024-06-10 12:09:05.009603] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.418 [2024-06-10 12:09:05.009775] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.418 [2024-06-10 12:09:05.009927] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.418 [2024-06-10 12:09:05.009935] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.418 [2024-06-10 12:09:05.009943] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.418 [2024-06-10 12:09:05.012224] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.418 [2024-06-10 12:09:05.021150] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.418 [2024-06-10 12:09:05.021761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.418 [2024-06-10 12:09:05.022207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.418 [2024-06-10 12:09:05.022219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.418 [2024-06-10 12:09:05.022228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.418 [2024-06-10 12:09:05.022388] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.419 [2024-06-10 12:09:05.022574] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.419 [2024-06-10 12:09:05.022583] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.419 [2024-06-10 12:09:05.022590] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.419 [2024-06-10 12:09:05.024740] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.419 [2024-06-10 12:09:05.033710] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.419 [2024-06-10 12:09:05.034204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.419 [2024-06-10 12:09:05.034416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.419 [2024-06-10 12:09:05.034430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.419 [2024-06-10 12:09:05.034441] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.419 [2024-06-10 12:09:05.034616] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.419 [2024-06-10 12:09:05.034729] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.419 [2024-06-10 12:09:05.034738] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.419 [2024-06-10 12:09:05.034744] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.419 [2024-06-10 12:09:05.037254] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.419 [2024-06-10 12:09:05.046216] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.419 [2024-06-10 12:09:05.046753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.419 [2024-06-10 12:09:05.046973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.419 [2024-06-10 12:09:05.046984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.419 [2024-06-10 12:09:05.046991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.419 [2024-06-10 12:09:05.047144] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.419 [2024-06-10 12:09:05.047318] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.419 [2024-06-10 12:09:05.047327] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.419 [2024-06-10 12:09:05.047334] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.419 [2024-06-10 12:09:05.049730] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.419 [2024-06-10 12:09:05.058917] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.419 [2024-06-10 12:09:05.059556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.419 [2024-06-10 12:09:05.059922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.419 [2024-06-10 12:09:05.059934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.419 [2024-06-10 12:09:05.059944] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.419 [2024-06-10 12:09:05.060155] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.419 [2024-06-10 12:09:05.060295] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.419 [2024-06-10 12:09:05.060304] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.419 [2024-06-10 12:09:05.060311] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.419 [2024-06-10 12:09:05.062819] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.419 [2024-06-10 12:09:05.071669] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.419 [2024-06-10 12:09:05.072265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.419 [2024-06-10 12:09:05.072658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.419 [2024-06-10 12:09:05.072671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.419 [2024-06-10 12:09:05.072680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.419 [2024-06-10 12:09:05.072834] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.419 [2024-06-10 12:09:05.072981] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.419 [2024-06-10 12:09:05.072989] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.419 [2024-06-10 12:09:05.072996] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.419 [2024-06-10 12:09:05.075147] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.419 [2024-06-10 12:09:05.084179] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.419 [2024-06-10 12:09:05.084765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.419 [2024-06-10 12:09:05.085139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.419 [2024-06-10 12:09:05.085152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.419 [2024-06-10 12:09:05.085161] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.419 [2024-06-10 12:09:05.085338] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.419 [2024-06-10 12:09:05.085470] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.419 [2024-06-10 12:09:05.085478] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.419 [2024-06-10 12:09:05.085486] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.419 [2024-06-10 12:09:05.087837] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.419 [2024-06-10 12:09:05.096782] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.419 [2024-06-10 12:09:05.097310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.419 [2024-06-10 12:09:05.097705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.419 [2024-06-10 12:09:05.097718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.419 [2024-06-10 12:09:05.097727] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.419 [2024-06-10 12:09:05.097957] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.419 [2024-06-10 12:09:05.098126] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.419 [2024-06-10 12:09:05.098134] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.419 [2024-06-10 12:09:05.098141] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.419 [2024-06-10 12:09:05.100618] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.419 [2024-06-10 12:09:05.109239] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.419 [2024-06-10 12:09:05.109715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.419 [2024-06-10 12:09:05.110067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.419 [2024-06-10 12:09:05.110077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.419 [2024-06-10 12:09:05.110084] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.419 [2024-06-10 12:09:05.110237] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.419 [2024-06-10 12:09:05.110399] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.419 [2024-06-10 12:09:05.110407] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.419 [2024-06-10 12:09:05.110414] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.419 [2024-06-10 12:09:05.112762] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.419 [2024-06-10 12:09:05.121745] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.419 [2024-06-10 12:09:05.122311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.419 [2024-06-10 12:09:05.122681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.419 [2024-06-10 12:09:05.122690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.419 [2024-06-10 12:09:05.122698] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.419 [2024-06-10 12:09:05.122828] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.419 [2024-06-10 12:09:05.122935] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.419 [2024-06-10 12:09:05.122943] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.419 [2024-06-10 12:09:05.122949] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.419 [2024-06-10 12:09:05.125370] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.419 [2024-06-10 12:09:05.134261] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.419 [2024-06-10 12:09:05.134881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.419 [2024-06-10 12:09:05.135263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.419 [2024-06-10 12:09:05.135277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.419 [2024-06-10 12:09:05.135286] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.419 [2024-06-10 12:09:05.135434] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.420 [2024-06-10 12:09:05.135568] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.420 [2024-06-10 12:09:05.135576] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.420 [2024-06-10 12:09:05.135583] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.420 [2024-06-10 12:09:05.137971] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.420 [2024-06-10 12:09:05.146993] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.420 [2024-06-10 12:09:05.147491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.420 [2024-06-10 12:09:05.147848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.420 [2024-06-10 12:09:05.147858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.420 [2024-06-10 12:09:05.147866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.420 [2024-06-10 12:09:05.148016] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.420 [2024-06-10 12:09:05.148147] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.420 [2024-06-10 12:09:05.148159] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.420 [2024-06-10 12:09:05.148166] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.420 [2024-06-10 12:09:05.150646] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.420 [2024-06-10 12:09:05.159566] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.420 [2024-06-10 12:09:05.160183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.420 [2024-06-10 12:09:05.160567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.420 [2024-06-10 12:09:05.160581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.420 [2024-06-10 12:09:05.160590] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.420 [2024-06-10 12:09:05.160719] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.420 [2024-06-10 12:09:05.160887] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.420 [2024-06-10 12:09:05.160895] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.420 [2024-06-10 12:09:05.160903] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.420 [2024-06-10 12:09:05.163305] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.420 [2024-06-10 12:09:05.172350] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.420 [2024-06-10 12:09:05.172932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.420 [2024-06-10 12:09:05.173307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.420 [2024-06-10 12:09:05.173321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.420 [2024-06-10 12:09:05.173331] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.420 [2024-06-10 12:09:05.173515] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.420 [2024-06-10 12:09:05.173708] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.420 [2024-06-10 12:09:05.173717] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.420 [2024-06-10 12:09:05.173724] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.420 [2024-06-10 12:09:05.175877] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.420 [2024-06-10 12:09:05.185013] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.420 [2024-06-10 12:09:05.185563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.420 [2024-06-10 12:09:05.185938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.420 [2024-06-10 12:09:05.185951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.420 [2024-06-10 12:09:05.185960] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.420 [2024-06-10 12:09:05.186088] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.420 [2024-06-10 12:09:05.186276] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.420 [2024-06-10 12:09:05.186285] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.420 [2024-06-10 12:09:05.186300] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.682 [2024-06-10 12:09:05.188602] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.682 [2024-06-10 12:09:05.197506] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.682 [2024-06-10 12:09:05.198199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-06-10 12:09:05.198583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-06-10 12:09:05.198597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.682 [2024-06-10 12:09:05.198607] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.682 [2024-06-10 12:09:05.198733] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.682 [2024-06-10 12:09:05.198805] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.682 [2024-06-10 12:09:05.198812] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.682 [2024-06-10 12:09:05.198820] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.682 [2024-06-10 12:09:05.201373] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.682 [2024-06-10 12:09:05.210384] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.682 [2024-06-10 12:09:05.210882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-06-10 12:09:05.211234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-06-10 12:09:05.211251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.682 [2024-06-10 12:09:05.211259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.682 [2024-06-10 12:09:05.211449] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.682 [2024-06-10 12:09:05.211632] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.682 [2024-06-10 12:09:05.211641] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.682 [2024-06-10 12:09:05.211648] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.682 [2024-06-10 12:09:05.213923] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.682 [2024-06-10 12:09:05.223023] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.682 [2024-06-10 12:09:05.223533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-06-10 12:09:05.223913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-06-10 12:09:05.223923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.682 [2024-06-10 12:09:05.223930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.682 [2024-06-10 12:09:05.224123] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.682 [2024-06-10 12:09:05.224282] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.682 [2024-06-10 12:09:05.224290] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.682 [2024-06-10 12:09:05.224296] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.682 [2024-06-10 12:09:05.226650] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.682 [2024-06-10 12:09:05.235611] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.682 [2024-06-10 12:09:05.236239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-06-10 12:09:05.236643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-06-10 12:09:05.236656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.682 [2024-06-10 12:09:05.236665] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.682 [2024-06-10 12:09:05.236815] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.682 [2024-06-10 12:09:05.237030] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.682 [2024-06-10 12:09:05.237039] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.682 [2024-06-10 12:09:05.237046] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.682 [2024-06-10 12:09:05.239380] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.682 [2024-06-10 12:09:05.248193] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.682 [2024-06-10 12:09:05.248687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-06-10 12:09:05.249086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-06-10 12:09:05.249099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.682 [2024-06-10 12:09:05.249108] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.682 [2024-06-10 12:09:05.249344] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.682 [2024-06-10 12:09:05.249455] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.682 [2024-06-10 12:09:05.249463] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.682 [2024-06-10 12:09:05.249471] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.682 [2024-06-10 12:09:05.251885] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.682 [2024-06-10 12:09:05.260774] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.682 [2024-06-10 12:09:05.261282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-06-10 12:09:05.261656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-06-10 12:09:05.261669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.682 [2024-06-10 12:09:05.261678] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.682 [2024-06-10 12:09:05.261869] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.683 [2024-06-10 12:09:05.262056] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.683 [2024-06-10 12:09:05.262065] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.683 [2024-06-10 12:09:05.262072] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.683 [2024-06-10 12:09:05.264408] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.683 [2024-06-10 12:09:05.273347] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.683 [2024-06-10 12:09:05.273961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-06-10 12:09:05.274335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-06-10 12:09:05.274349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.683 [2024-06-10 12:09:05.274358] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.683 [2024-06-10 12:09:05.274527] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.683 [2024-06-10 12:09:05.274618] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.683 [2024-06-10 12:09:05.274626] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.683 [2024-06-10 12:09:05.274633] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.683 [2024-06-10 12:09:05.277058] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.683 [2024-06-10 12:09:05.285931] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.683 [2024-06-10 12:09:05.286566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-06-10 12:09:05.286881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-06-10 12:09:05.286895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.683 [2024-06-10 12:09:05.286904] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.683 [2024-06-10 12:09:05.287014] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.683 [2024-06-10 12:09:05.287126] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.683 [2024-06-10 12:09:05.287135] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.683 [2024-06-10 12:09:05.287143] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.683 [2024-06-10 12:09:05.289664] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.683 [2024-06-10 12:09:05.298483] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.683 [2024-06-10 12:09:05.299115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-06-10 12:09:05.299390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-06-10 12:09:05.299405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.683 [2024-06-10 12:09:05.299414] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.683 [2024-06-10 12:09:05.299540] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.683 [2024-06-10 12:09:05.299731] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.683 [2024-06-10 12:09:05.299739] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.683 [2024-06-10 12:09:05.299746] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.683 [2024-06-10 12:09:05.302095] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.683 [2024-06-10 12:09:05.311135] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.683 [2024-06-10 12:09:05.311597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-06-10 12:09:05.311965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-06-10 12:09:05.311975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.683 [2024-06-10 12:09:05.311982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.683 [2024-06-10 12:09:05.312170] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.683 [2024-06-10 12:09:05.312306] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.683 [2024-06-10 12:09:05.312314] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.683 [2024-06-10 12:09:05.312321] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.683 [2024-06-10 12:09:05.314590] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.683 [2024-06-10 12:09:05.323577] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.683 [2024-06-10 12:09:05.324074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-06-10 12:09:05.324422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-06-10 12:09:05.324433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.683 [2024-06-10 12:09:05.324440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.683 [2024-06-10 12:09:05.324630] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.683 [2024-06-10 12:09:05.324760] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.683 [2024-06-10 12:09:05.324768] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.683 [2024-06-10 12:09:05.324775] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.683 [2024-06-10 12:09:05.327044] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.683 [2024-06-10 12:09:05.336345] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.683 [2024-06-10 12:09:05.336931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-06-10 12:09:05.337302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-06-10 12:09:05.337316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.683 [2024-06-10 12:09:05.337325] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.683 [2024-06-10 12:09:05.337494] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.683 [2024-06-10 12:09:05.337607] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.683 [2024-06-10 12:09:05.337615] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.683 [2024-06-10 12:09:05.337622] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.683 [2024-06-10 12:09:05.340047] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.683 [2024-06-10 12:09:05.348959] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.683 [2024-06-10 12:09:05.349587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-06-10 12:09:05.350031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-06-10 12:09:05.350049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.683 [2024-06-10 12:09:05.350059] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.683 [2024-06-10 12:09:05.350185] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.683 [2024-06-10 12:09:05.350328] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.683 [2024-06-10 12:09:05.350337] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.683 [2024-06-10 12:09:05.350344] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.683 [2024-06-10 12:09:05.352764] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.683 [2024-06-10 12:09:05.361505] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.683 [2024-06-10 12:09:05.362056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-06-10 12:09:05.362436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-06-10 12:09:05.362450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.683 [2024-06-10 12:09:05.362459] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.683 [2024-06-10 12:09:05.362628] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.683 [2024-06-10 12:09:05.362740] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.683 [2024-06-10 12:09:05.362748] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.683 [2024-06-10 12:09:05.362756] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.683 [2024-06-10 12:09:05.365253] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.683 [2024-06-10 12:09:05.374210] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.683 [2024-06-10 12:09:05.374767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-06-10 12:09:05.375115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-06-10 12:09:05.375125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.683 [2024-06-10 12:09:05.375132] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.683 [2024-06-10 12:09:05.375290] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.683 [2024-06-10 12:09:05.375379] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.683 [2024-06-10 12:09:05.375387] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.683 [2024-06-10 12:09:05.375394] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.683 [2024-06-10 12:09:05.377769] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.684 [2024-06-10 12:09:05.386664] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.684 [2024-06-10 12:09:05.387190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-06-10 12:09:05.387539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-06-10 12:09:05.387550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.684 [2024-06-10 12:09:05.387562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.684 [2024-06-10 12:09:05.387694] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.684 [2024-06-10 12:09:05.387843] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.684 [2024-06-10 12:09:05.387851] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.684 [2024-06-10 12:09:05.387858] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.684 [2024-06-10 12:09:05.390112] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.684 [2024-06-10 12:09:05.399294] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.684 [2024-06-10 12:09:05.399842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-06-10 12:09:05.400136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-06-10 12:09:05.400145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.684 [2024-06-10 12:09:05.400153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.684 [2024-06-10 12:09:05.400309] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.684 [2024-06-10 12:09:05.400459] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.684 [2024-06-10 12:09:05.400466] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.684 [2024-06-10 12:09:05.400473] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.684 [2024-06-10 12:09:05.402727] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.684 [2024-06-10 12:09:05.411807] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.684 [2024-06-10 12:09:05.412475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-06-10 12:09:05.412842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-06-10 12:09:05.412855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.684 [2024-06-10 12:09:05.412864] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.684 [2024-06-10 12:09:05.413033] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.684 [2024-06-10 12:09:05.413226] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.684 [2024-06-10 12:09:05.413234] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.684 [2024-06-10 12:09:05.413241] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.684 [2024-06-10 12:09:05.415807] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.684 [2024-06-10 12:09:05.424163] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.684 [2024-06-10 12:09:05.424526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-06-10 12:09:05.424883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-06-10 12:09:05.424892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.684 [2024-06-10 12:09:05.424900] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.684 [2024-06-10 12:09:05.425040] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.684 [2024-06-10 12:09:05.425171] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.684 [2024-06-10 12:09:05.425179] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.684 [2024-06-10 12:09:05.425186] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.684 [2024-06-10 12:09:05.427606] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.684 [2024-06-10 12:09:05.436709] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.684 [2024-06-10 12:09:05.437075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-06-10 12:09:05.437420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-06-10 12:09:05.437431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.684 [2024-06-10 12:09:05.437438] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.684 [2024-06-10 12:09:05.437551] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.684 [2024-06-10 12:09:05.437677] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.684 [2024-06-10 12:09:05.437684] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.684 [2024-06-10 12:09:05.437691] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.684 [2024-06-10 12:09:05.440325] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.684 [2024-06-10 12:09:05.449314] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.684 [2024-06-10 12:09:05.449813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-06-10 12:09:05.450165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-06-10 12:09:05.450175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.684 [2024-06-10 12:09:05.450182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.684 [2024-06-10 12:09:05.450364] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.684 [2024-06-10 12:09:05.450577] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.684 [2024-06-10 12:09:05.450585] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.684 [2024-06-10 12:09:05.450592] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.947 [2024-06-10 12:09:05.452996] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.947 [2024-06-10 12:09:05.461897] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.947 [2024-06-10 12:09:05.462487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.947 [2024-06-10 12:09:05.462744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.947 [2024-06-10 12:09:05.462757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.947 [2024-06-10 12:09:05.462766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.947 [2024-06-10 12:09:05.462935] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.947 [2024-06-10 12:09:05.463148] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.947 [2024-06-10 12:09:05.463157] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.947 [2024-06-10 12:09:05.463164] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.947 [2024-06-10 12:09:05.465185] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.947 [2024-06-10 12:09:05.474465] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.947 [2024-06-10 12:09:05.474961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.947 [2024-06-10 12:09:05.475291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.947 [2024-06-10 12:09:05.475302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.947 [2024-06-10 12:09:05.475310] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.947 [2024-06-10 12:09:05.475463] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.947 [2024-06-10 12:09:05.475613] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.947 [2024-06-10 12:09:05.475621] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.947 [2024-06-10 12:09:05.475627] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.947 [2024-06-10 12:09:05.478030] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.947 [2024-06-10 12:09:05.487460] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.947 [2024-06-10 12:09:05.487995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.947 [2024-06-10 12:09:05.488221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.947 [2024-06-10 12:09:05.488232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.947 [2024-06-10 12:09:05.488241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.947 [2024-06-10 12:09:05.488406] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.947 [2024-06-10 12:09:05.488532] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.947 [2024-06-10 12:09:05.488540] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.947 [2024-06-10 12:09:05.488547] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.947 [2024-06-10 12:09:05.490873] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.947 [2024-06-10 12:09:05.500117] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.947 [2024-06-10 12:09:05.500911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.947 [2024-06-10 12:09:05.501277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.947 [2024-06-10 12:09:05.501288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.947 [2024-06-10 12:09:05.501296] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.947 [2024-06-10 12:09:05.501457] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.947 [2024-06-10 12:09:05.501633] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.947 [2024-06-10 12:09:05.501641] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.947 [2024-06-10 12:09:05.501651] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.947 [2024-06-10 12:09:05.504159] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.947 [2024-06-10 12:09:05.512637] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.947 [2024-06-10 12:09:05.513141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.947 [2024-06-10 12:09:05.513469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.947 [2024-06-10 12:09:05.513479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.947 [2024-06-10 12:09:05.513487] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.947 [2024-06-10 12:09:05.513680] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.947 [2024-06-10 12:09:05.513827] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.947 [2024-06-10 12:09:05.513834] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.947 [2024-06-10 12:09:05.513841] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.947 [2024-06-10 12:09:05.516349] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.947 [2024-06-10 12:09:05.525360] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.947 [2024-06-10 12:09:05.525893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.947 [2024-06-10 12:09:05.526348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.947 [2024-06-10 12:09:05.526358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.947 [2024-06-10 12:09:05.526365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.947 [2024-06-10 12:09:05.526549] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.947 [2024-06-10 12:09:05.526695] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.947 [2024-06-10 12:09:05.526703] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.947 [2024-06-10 12:09:05.526710] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.947 [2024-06-10 12:09:05.528836] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.947 [2024-06-10 12:09:05.538093] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.947 [2024-06-10 12:09:05.538375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.947 [2024-06-10 12:09:05.538739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.947 [2024-06-10 12:09:05.538748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.947 [2024-06-10 12:09:05.538756] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.947 [2024-06-10 12:09:05.538866] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.947 [2024-06-10 12:09:05.539036] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.947 [2024-06-10 12:09:05.539045] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.947 [2024-06-10 12:09:05.539052] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.947 [2024-06-10 12:09:05.541494] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.947 [2024-06-10 12:09:05.550714] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.947 [2024-06-10 12:09:05.551269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.947 [2024-06-10 12:09:05.551728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.947 [2024-06-10 12:09:05.551737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.947 [2024-06-10 12:09:05.551745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.947 [2024-06-10 12:09:05.551854] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.948 [2024-06-10 12:09:05.552046] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.948 [2024-06-10 12:09:05.552054] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.948 [2024-06-10 12:09:05.552060] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.948 [2024-06-10 12:09:05.554317] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.948 [2024-06-10 12:09:05.563378] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.948 [2024-06-10 12:09:05.564018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.948 [2024-06-10 12:09:05.564423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.948 [2024-06-10 12:09:05.564438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.948 [2024-06-10 12:09:05.564447] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.948 [2024-06-10 12:09:05.564601] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.948 [2024-06-10 12:09:05.564794] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.948 [2024-06-10 12:09:05.564802] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.948 [2024-06-10 12:09:05.564809] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.948 [2024-06-10 12:09:05.567222] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.948 [2024-06-10 12:09:05.575780] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.948 [2024-06-10 12:09:05.576319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.948 [2024-06-10 12:09:05.576687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.948 [2024-06-10 12:09:05.576697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.948 [2024-06-10 12:09:05.576704] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.948 [2024-06-10 12:09:05.576838] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.948 [2024-06-10 12:09:05.576945] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.948 [2024-06-10 12:09:05.576953] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.948 [2024-06-10 12:09:05.576960] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.948 [2024-06-10 12:09:05.579327] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.948 [2024-06-10 12:09:05.588460] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.948 [2024-06-10 12:09:05.588998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.948 [2024-06-10 12:09:05.589229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.948 [2024-06-10 12:09:05.589250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.948 [2024-06-10 12:09:05.589260] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.948 [2024-06-10 12:09:05.589389] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.948 [2024-06-10 12:09:05.589561] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.948 [2024-06-10 12:09:05.589569] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.948 [2024-06-10 12:09:05.589576] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.948 [2024-06-10 12:09:05.591973] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.948 [2024-06-10 12:09:05.601003] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.948 [2024-06-10 12:09:05.601521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.948 [2024-06-10 12:09:05.601875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.948 [2024-06-10 12:09:05.601885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.948 [2024-06-10 12:09:05.601892] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.948 [2024-06-10 12:09:05.602079] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.948 [2024-06-10 12:09:05.602213] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.948 [2024-06-10 12:09:05.602221] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.948 [2024-06-10 12:09:05.602227] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.948 [2024-06-10 12:09:05.604507] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.948 [2024-06-10 12:09:05.613538] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.948 [2024-06-10 12:09:05.613980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.948 [2024-06-10 12:09:05.614337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.948 [2024-06-10 12:09:05.614347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.948 [2024-06-10 12:09:05.614355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.948 [2024-06-10 12:09:05.614483] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.948 [2024-06-10 12:09:05.614614] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.948 [2024-06-10 12:09:05.614621] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.948 [2024-06-10 12:09:05.614628] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.948 [2024-06-10 12:09:05.617097] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.948 [2024-06-10 12:09:05.626377] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.948 [2024-06-10 12:09:05.626901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.948 [2024-06-10 12:09:05.627253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.948 [2024-06-10 12:09:05.627263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.948 [2024-06-10 12:09:05.627271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.948 [2024-06-10 12:09:05.627461] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.948 [2024-06-10 12:09:05.627594] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.948 [2024-06-10 12:09:05.627602] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.948 [2024-06-10 12:09:05.627608] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.948 [2024-06-10 12:09:05.629989] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.948 [2024-06-10 12:09:05.638929] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.948 [2024-06-10 12:09:05.639511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.948 [2024-06-10 12:09:05.639887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.948 [2024-06-10 12:09:05.639899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.948 [2024-06-10 12:09:05.639909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.948 [2024-06-10 12:09:05.640099] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.948 [2024-06-10 12:09:05.640301] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.948 [2024-06-10 12:09:05.640310] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.948 [2024-06-10 12:09:05.640317] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.948 [2024-06-10 12:09:05.642630] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
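The timestamps show a new reset attempt roughly every 12-13 ms of target wall-clock time, each ending the same way. Generically, this kind of bounded reconnect loop can be sketched as below (illustrative only, not SPDK's bdev_nvme reset logic; the callback type, attempt count, and delay are assumptions for the sketch):

/* Illustrative reconnect loop, not SPDK code: retry a connect callback a
 * bounded number of times with a fixed delay, and give up -- the analogue
 * of "Resetting controller failed." in the log -- if every attempt keeps
 * returning -ECONNREFUSED. */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

typedef int (*connect_fn)(void *ctx);   /* returns 0 on success, -errno on failure */

static bool reconnect_with_retries(connect_fn try_connect, void *ctx,
                                   int max_attempts, unsigned int delay_ms)
{
    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        int rc = try_connect(ctx);
        if (rc == 0) {
            return true;                 /* controller reconnected */
        }
        fprintf(stderr, "attempt %d failed, rc = %d\n", attempt, rc);
        usleep(delay_ms * 1000);         /* wait before the next attempt */
    }
    return false;                        /* give up: reset has failed */
}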
00:31:11.948 [2024-06-10 12:09:05.651383] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.948 [2024-06-10 12:09:05.651965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.948 [2024-06-10 12:09:05.652450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.948 [2024-06-10 12:09:05.652487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.948 [2024-06-10 12:09:05.652497] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.948 [2024-06-10 12:09:05.652709] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.948 [2024-06-10 12:09:05.652819] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.948 [2024-06-10 12:09:05.652828] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.948 [2024-06-10 12:09:05.652836] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.948 [2024-06-10 12:09:05.655200] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.948 [2024-06-10 12:09:05.663984] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.948 [2024-06-10 12:09:05.664481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.948 [2024-06-10 12:09:05.664836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.948 [2024-06-10 12:09:05.664846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.948 [2024-06-10 12:09:05.664854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.948 [2024-06-10 12:09:05.665026] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.948 [2024-06-10 12:09:05.665170] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.948 [2024-06-10 12:09:05.665178] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.948 [2024-06-10 12:09:05.665185] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.948 [2024-06-10 12:09:05.667522] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.948 [2024-06-10 12:09:05.676577] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.949 [2024-06-10 12:09:05.677093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.949 [2024-06-10 12:09:05.677456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.949 [2024-06-10 12:09:05.677467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.949 [2024-06-10 12:09:05.677475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.949 [2024-06-10 12:09:05.677606] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.949 [2024-06-10 12:09:05.677734] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.949 [2024-06-10 12:09:05.677742] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.949 [2024-06-10 12:09:05.677748] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.949 [2024-06-10 12:09:05.680034] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.949 [2024-06-10 12:09:05.689064] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.949 [2024-06-10 12:09:05.689487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.949 [2024-06-10 12:09:05.689835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.949 [2024-06-10 12:09:05.689845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.949 [2024-06-10 12:09:05.689852] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.949 [2024-06-10 12:09:05.690001] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.949 [2024-06-10 12:09:05.690092] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.949 [2024-06-10 12:09:05.690100] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.949 [2024-06-10 12:09:05.690107] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.949 [2024-06-10 12:09:05.692555] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.949 [2024-06-10 12:09:05.701690] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.949 [2024-06-10 12:09:05.702234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.949 [2024-06-10 12:09:05.702595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.949 [2024-06-10 12:09:05.702605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.949 [2024-06-10 12:09:05.702616] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.949 [2024-06-10 12:09:05.702726] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.949 [2024-06-10 12:09:05.702816] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.949 [2024-06-10 12:09:05.702823] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.949 [2024-06-10 12:09:05.702830] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.949 [2024-06-10 12:09:05.705140] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.949 [2024-06-10 12:09:05.714266] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.949 [2024-06-10 12:09:05.714758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.949 [2024-06-10 12:09:05.715134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.949 [2024-06-10 12:09:05.715147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:11.949 [2024-06-10 12:09:05.715156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:11.949 [2024-06-10 12:09:05.715350] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:11.949 [2024-06-10 12:09:05.715467] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.949 [2024-06-10 12:09:05.715475] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.949 [2024-06-10 12:09:05.715482] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.211 [2024-06-10 12:09:05.717771] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.211 [2024-06-10 12:09:05.726921] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.211 [2024-06-10 12:09:05.727465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.211 [2024-06-10 12:09:05.727818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.211 [2024-06-10 12:09:05.727828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.211 [2024-06-10 12:09:05.727836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.211 [2024-06-10 12:09:05.728001] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.211 [2024-06-10 12:09:05.728148] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.211 [2024-06-10 12:09:05.728156] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.211 [2024-06-10 12:09:05.728163] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.211 [2024-06-10 12:09:05.730490] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.211 [2024-06-10 12:09:05.739385] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.211 [2024-06-10 12:09:05.739881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.211 [2024-06-10 12:09:05.740218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.211 [2024-06-10 12:09:05.740227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.211 [2024-06-10 12:09:05.740235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.211 [2024-06-10 12:09:05.740415] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.211 [2024-06-10 12:09:05.740565] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.211 [2024-06-10 12:09:05.740573] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.211 [2024-06-10 12:09:05.740580] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.211 [2024-06-10 12:09:05.742979] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.211 [2024-06-10 12:09:05.751990] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.211 [2024-06-10 12:09:05.752579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.211 [2024-06-10 12:09:05.752958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.211 [2024-06-10 12:09:05.752971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.211 [2024-06-10 12:09:05.752980] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.211 [2024-06-10 12:09:05.753149] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.211 [2024-06-10 12:09:05.753334] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.211 [2024-06-10 12:09:05.753343] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.211 [2024-06-10 12:09:05.753350] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.211 [2024-06-10 12:09:05.755847] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.211 [2024-06-10 12:09:05.764531] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.211 [2024-06-10 12:09:05.765027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.211 [2024-06-10 12:09:05.765467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.211 [2024-06-10 12:09:05.765504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.211 [2024-06-10 12:09:05.765514] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.211 [2024-06-10 12:09:05.765684] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.212 [2024-06-10 12:09:05.765794] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.212 [2024-06-10 12:09:05.765802] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.212 [2024-06-10 12:09:05.765810] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.212 [2024-06-10 12:09:05.768178] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.212 [2024-06-10 12:09:05.776843] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.212 [2024-06-10 12:09:05.777339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.212 [2024-06-10 12:09:05.777739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.212 [2024-06-10 12:09:05.777749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.212 [2024-06-10 12:09:05.777757] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.212 [2024-06-10 12:09:05.777929] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.212 [2024-06-10 12:09:05.778105] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.212 [2024-06-10 12:09:05.778113] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.212 [2024-06-10 12:09:05.778120] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.212 [2024-06-10 12:09:05.780545] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.212 [2024-06-10 12:09:05.789631] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.212 [2024-06-10 12:09:05.790169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.212 [2024-06-10 12:09:05.790530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.212 [2024-06-10 12:09:05.790540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.212 [2024-06-10 12:09:05.790547] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.212 [2024-06-10 12:09:05.790654] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.212 [2024-06-10 12:09:05.790816] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.212 [2024-06-10 12:09:05.790824] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.212 [2024-06-10 12:09:05.790830] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.212 [2024-06-10 12:09:05.793111] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.212 [2024-06-10 12:09:05.802640] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.212 [2024-06-10 12:09:05.803195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.212 [2024-06-10 12:09:05.803564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.212 [2024-06-10 12:09:05.803574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.212 [2024-06-10 12:09:05.803581] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.212 [2024-06-10 12:09:05.803765] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.212 [2024-06-10 12:09:05.803895] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.212 [2024-06-10 12:09:05.803903] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.212 [2024-06-10 12:09:05.803909] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.212 [2024-06-10 12:09:05.806462] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.212 [2024-06-10 12:09:05.815225] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.212 [2024-06-10 12:09:05.815720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.212 [2024-06-10 12:09:05.816068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.212 [2024-06-10 12:09:05.816077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.212 [2024-06-10 12:09:05.816084] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.212 [2024-06-10 12:09:05.816234] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.212 [2024-06-10 12:09:05.816371] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.212 [2024-06-10 12:09:05.816382] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.212 [2024-06-10 12:09:05.816389] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.212 [2024-06-10 12:09:05.818733] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.212 [2024-06-10 12:09:05.827551] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.212 [2024-06-10 12:09:05.828095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.212 [2024-06-10 12:09:05.828439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.212 [2024-06-10 12:09:05.828449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.212 [2024-06-10 12:09:05.828457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.212 [2024-06-10 12:09:05.828606] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.212 [2024-06-10 12:09:05.828737] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.212 [2024-06-10 12:09:05.828744] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.212 [2024-06-10 12:09:05.828751] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.212 [2024-06-10 12:09:05.831130] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.212 [2024-06-10 12:09:05.840274] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.212 [2024-06-10 12:09:05.840741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.212 [2024-06-10 12:09:05.841078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.212 [2024-06-10 12:09:05.841091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.212 [2024-06-10 12:09:05.841100] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.212 [2024-06-10 12:09:05.841232] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.212 [2024-06-10 12:09:05.841410] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.212 [2024-06-10 12:09:05.841419] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.212 [2024-06-10 12:09:05.841426] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.212 [2024-06-10 12:09:05.843784] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.212 [2024-06-10 12:09:05.852954] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.212 [2024-06-10 12:09:05.853544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.212 [2024-06-10 12:09:05.853921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.212 [2024-06-10 12:09:05.853934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.212 [2024-06-10 12:09:05.853943] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.212 [2024-06-10 12:09:05.854112] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.212 [2024-06-10 12:09:05.854315] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.212 [2024-06-10 12:09:05.854324] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.212 [2024-06-10 12:09:05.854335] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.212 [2024-06-10 12:09:05.856810] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.212 [2024-06-10 12:09:05.865541] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.212 [2024-06-10 12:09:05.866119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.212 [2024-06-10 12:09:05.866347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.212 [2024-06-10 12:09:05.866359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.212 [2024-06-10 12:09:05.866367] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.212 [2024-06-10 12:09:05.866542] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.212 [2024-06-10 12:09:05.866687] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.212 [2024-06-10 12:09:05.866696] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.212 [2024-06-10 12:09:05.866702] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.212 [2024-06-10 12:09:05.869041] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
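Two error numbers recur in every cycle: 111 from the connect() path and (9) from the flush path. On Linux/glibc these are ECONNREFUSED and EBADF respectively; a quick standalone check (not SPDK code) confirms the mapping:

/* Standalone check of the two errno values seen in each cycle above. */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    printf("errno 111 -> %s (ECONNREFUSED = %d)\n", strerror(111), ECONNREFUSED);
    printf("errno   9 -> %s (EBADF        = %d)\n", strerror(9), EBADF);
    return 0;
}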
00:31:12.212 [2024-06-10 12:09:05.878522] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.212 [2024-06-10 12:09:05.879105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.212 [2024-06-10 12:09:05.879372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.212 [2024-06-10 12:09:05.879387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.212 [2024-06-10 12:09:05.879397] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.213 [2024-06-10 12:09:05.879522] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.213 [2024-06-10 12:09:05.879694] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.213 [2024-06-10 12:09:05.879703] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.213 [2024-06-10 12:09:05.879710] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.213 [2024-06-10 12:09:05.882162] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.213 [2024-06-10 12:09:05.891351] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.213 [2024-06-10 12:09:05.891861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.213 [2024-06-10 12:09:05.892233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.213 [2024-06-10 12:09:05.892254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.213 [2024-06-10 12:09:05.892264] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.213 [2024-06-10 12:09:05.892451] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.213 [2024-06-10 12:09:05.892543] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.213 [2024-06-10 12:09:05.892551] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.213 [2024-06-10 12:09:05.892558] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.213 [2024-06-10 12:09:05.895231] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.213 [2024-06-10 12:09:05.904054] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.213 [2024-06-10 12:09:05.904706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.213 [2024-06-10 12:09:05.905074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.213 [2024-06-10 12:09:05.905087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.213 [2024-06-10 12:09:05.905096] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.213 [2024-06-10 12:09:05.905249] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.213 [2024-06-10 12:09:05.905425] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.213 [2024-06-10 12:09:05.905433] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.213 [2024-06-10 12:09:05.905440] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.213 [2024-06-10 12:09:05.907743] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.213 [2024-06-10 12:09:05.916899] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.213 [2024-06-10 12:09:05.917531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.213 [2024-06-10 12:09:05.917905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.213 [2024-06-10 12:09:05.917918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.213 [2024-06-10 12:09:05.917927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.213 [2024-06-10 12:09:05.918096] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.213 [2024-06-10 12:09:05.918278] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.213 [2024-06-10 12:09:05.918287] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.213 [2024-06-10 12:09:05.918294] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.213 [2024-06-10 12:09:05.920636] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.213 [2024-06-10 12:09:05.929440] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.213 [2024-06-10 12:09:05.929941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.213 [2024-06-10 12:09:05.930290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.213 [2024-06-10 12:09:05.930301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.213 [2024-06-10 12:09:05.930308] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.213 [2024-06-10 12:09:05.930418] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.213 [2024-06-10 12:09:05.930565] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.213 [2024-06-10 12:09:05.930573] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.213 [2024-06-10 12:09:05.930580] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.213 [2024-06-10 12:09:05.932846] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.213 [2024-06-10 12:09:05.942370] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.213 [2024-06-10 12:09:05.942702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.213 [2024-06-10 12:09:05.942981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.213 [2024-06-10 12:09:05.942992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.213 [2024-06-10 12:09:05.942999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.213 [2024-06-10 12:09:05.943193] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.213 [2024-06-10 12:09:05.943367] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.213 [2024-06-10 12:09:05.943376] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.213 [2024-06-10 12:09:05.943382] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.213 [2024-06-10 12:09:05.945508] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.213 [2024-06-10 12:09:05.955170] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.213 [2024-06-10 12:09:05.955722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.213 [2024-06-10 12:09:05.956003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.213 [2024-06-10 12:09:05.956013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.213 [2024-06-10 12:09:05.956020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.213 [2024-06-10 12:09:05.956185] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.213 [2024-06-10 12:09:05.956337] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.213 [2024-06-10 12:09:05.956346] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.213 [2024-06-10 12:09:05.956354] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.213 [2024-06-10 12:09:05.958642] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.213 [2024-06-10 12:09:05.967856] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.213 [2024-06-10 12:09:05.968424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.213 [2024-06-10 12:09:05.968781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.213 [2024-06-10 12:09:05.968790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.213 [2024-06-10 12:09:05.968798] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.213 [2024-06-10 12:09:05.968909] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.213 [2024-06-10 12:09:05.969117] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.213 [2024-06-10 12:09:05.969125] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.213 [2024-06-10 12:09:05.969132] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.213 [2024-06-10 12:09:05.971515] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.213 [2024-06-10 12:09:05.980248] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.213 [2024-06-10 12:09:05.980742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.213 [2024-06-10 12:09:05.981095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.213 [2024-06-10 12:09:05.981105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.213 [2024-06-10 12:09:05.981112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.213 [2024-06-10 12:09:05.981288] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.213 [2024-06-10 12:09:05.981398] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.213 [2024-06-10 12:09:05.981405] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.213 [2024-06-10 12:09:05.981411] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.474 [2024-06-10 12:09:05.983854] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.474 [2024-06-10 12:09:05.992851] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.474 [2024-06-10 12:09:05.993463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.474 [2024-06-10 12:09:05.993694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.474 [2024-06-10 12:09:05.993709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.474 [2024-06-10 12:09:05.993718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.474 [2024-06-10 12:09:05.993869] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.474 [2024-06-10 12:09:05.993989] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.474 [2024-06-10 12:09:05.993997] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.474 [2024-06-10 12:09:05.994005] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.474 [2024-06-10 12:09:05.996340] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.475 [2024-06-10 12:09:06.005592] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.475 [2024-06-10 12:09:06.006094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.475 [2024-06-10 12:09:06.006485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.475 [2024-06-10 12:09:06.006499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.475 [2024-06-10 12:09:06.006508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.475 [2024-06-10 12:09:06.006674] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.475 [2024-06-10 12:09:06.006867] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.475 [2024-06-10 12:09:06.006875] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.475 [2024-06-10 12:09:06.006882] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.475 [2024-06-10 12:09:06.009200] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.475 [2024-06-10 12:09:06.018137] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.475 [2024-06-10 12:09:06.018703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.475 [2024-06-10 12:09:06.019047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.475 [2024-06-10 12:09:06.019062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.475 [2024-06-10 12:09:06.019070] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.475 [2024-06-10 12:09:06.019201] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.475 [2024-06-10 12:09:06.019337] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.475 [2024-06-10 12:09:06.019345] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.475 [2024-06-10 12:09:06.019352] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.475 [2024-06-10 12:09:06.021737] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.475 [2024-06-10 12:09:06.030788] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.475 [2024-06-10 12:09:06.031317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.475 [2024-06-10 12:09:06.031675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.475 [2024-06-10 12:09:06.031684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.475 [2024-06-10 12:09:06.031692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.475 [2024-06-10 12:09:06.031823] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.475 [2024-06-10 12:09:06.031966] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.475 [2024-06-10 12:09:06.031974] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.475 [2024-06-10 12:09:06.031981] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.475 [2024-06-10 12:09:06.034495] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.475 [2024-06-10 12:09:06.043425] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.475 [2024-06-10 12:09:06.044028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.475 [2024-06-10 12:09:06.044316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.475 [2024-06-10 12:09:06.044331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.475 [2024-06-10 12:09:06.044341] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.475 [2024-06-10 12:09:06.044510] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.475 [2024-06-10 12:09:06.044661] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.475 [2024-06-10 12:09:06.044669] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.475 [2024-06-10 12:09:06.044676] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.475 [2024-06-10 12:09:06.047001] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.475 [2024-06-10 12:09:06.055988] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.475 [2024-06-10 12:09:06.056623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.475 [2024-06-10 12:09:06.056997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.475 [2024-06-10 12:09:06.057010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.475 [2024-06-10 12:09:06.057023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.475 [2024-06-10 12:09:06.057173] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.475 [2024-06-10 12:09:06.057315] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.475 [2024-06-10 12:09:06.057325] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.475 [2024-06-10 12:09:06.057332] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.475 [2024-06-10 12:09:06.059947] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.475 [2024-06-10 12:09:06.068560] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.475 [2024-06-10 12:09:06.069101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.475 [2024-06-10 12:09:06.069458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.475 [2024-06-10 12:09:06.069469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.475 [2024-06-10 12:09:06.069476] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.475 [2024-06-10 12:09:06.069651] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.475 [2024-06-10 12:09:06.069864] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.475 [2024-06-10 12:09:06.069873] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.475 [2024-06-10 12:09:06.069879] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.475 [2024-06-10 12:09:06.072095] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.475 [2024-06-10 12:09:06.081227] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.475 [2024-06-10 12:09:06.081739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.475 [2024-06-10 12:09:06.081965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.475 [2024-06-10 12:09:06.081977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.475 [2024-06-10 12:09:06.081985] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.475 [2024-06-10 12:09:06.082116] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.475 [2024-06-10 12:09:06.082271] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.475 [2024-06-10 12:09:06.082280] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.475 [2024-06-10 12:09:06.082286] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.475 [2024-06-10 12:09:06.084555] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.475 [2024-06-10 12:09:06.094017] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.475 [2024-06-10 12:09:06.094540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.475 [2024-06-10 12:09:06.094890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.475 [2024-06-10 12:09:06.094900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.475 [2024-06-10 12:09:06.094907] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.475 [2024-06-10 12:09:06.095023] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.475 [2024-06-10 12:09:06.095194] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.475 [2024-06-10 12:09:06.095202] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.475 [2024-06-10 12:09:06.095208] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.475 [2024-06-10 12:09:06.097657] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.475 [2024-06-10 12:09:06.106633] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.475 [2024-06-10 12:09:06.107161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.475 [2024-06-10 12:09:06.107520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.475 [2024-06-10 12:09:06.107531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.475 [2024-06-10 12:09:06.107538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.475 [2024-06-10 12:09:06.107648] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.475 [2024-06-10 12:09:06.107813] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.475 [2024-06-10 12:09:06.107820] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.475 [2024-06-10 12:09:06.107828] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.475 [2024-06-10 12:09:06.110141] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.475 [2024-06-10 12:09:06.119217] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.475 [2024-06-10 12:09:06.119712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.475 [2024-06-10 12:09:06.119947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.475 [2024-06-10 12:09:06.119957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.476 [2024-06-10 12:09:06.119964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.476 [2024-06-10 12:09:06.120132] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.476 [2024-06-10 12:09:06.120281] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.476 [2024-06-10 12:09:06.120289] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.476 [2024-06-10 12:09:06.120296] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.476 [2024-06-10 12:09:06.122616] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.476 [2024-06-10 12:09:06.131988] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.476 [2024-06-10 12:09:06.132614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.476 [2024-06-10 12:09:06.133042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.476 [2024-06-10 12:09:06.133051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.476 [2024-06-10 12:09:06.133058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.476 [2024-06-10 12:09:06.133205] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.476 [2024-06-10 12:09:06.133344] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.476 [2024-06-10 12:09:06.133352] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.476 [2024-06-10 12:09:06.133359] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.476 [2024-06-10 12:09:06.135862] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.476 [2024-06-10 12:09:06.144645] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.476 [2024-06-10 12:09:06.145193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.476 [2024-06-10 12:09:06.145469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.476 [2024-06-10 12:09:06.145479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.476 [2024-06-10 12:09:06.145486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.476 [2024-06-10 12:09:06.145635] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.476 [2024-06-10 12:09:06.145786] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.476 [2024-06-10 12:09:06.145793] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.476 [2024-06-10 12:09:06.145800] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.476 [2024-06-10 12:09:06.148299] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.476 [2024-06-10 12:09:06.157379] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.476 [2024-06-10 12:09:06.157817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.476 [2024-06-10 12:09:06.158206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.476 [2024-06-10 12:09:06.158216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.476 [2024-06-10 12:09:06.158224] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.476 [2024-06-10 12:09:06.158355] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.476 [2024-06-10 12:09:06.158484] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.476 [2024-06-10 12:09:06.158491] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.476 [2024-06-10 12:09:06.158498] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.476 [2024-06-10 12:09:06.160967] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.476 [2024-06-10 12:09:06.170184] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.476 [2024-06-10 12:09:06.170703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.476 [2024-06-10 12:09:06.171056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.476 [2024-06-10 12:09:06.171065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.476 [2024-06-10 12:09:06.171072] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.476 [2024-06-10 12:09:06.171218] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.476 [2024-06-10 12:09:06.171372] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.476 [2024-06-10 12:09:06.171384] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.476 [2024-06-10 12:09:06.171391] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.476 [2024-06-10 12:09:06.173931] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.476 [2024-06-10 12:09:06.182779] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.476 [2024-06-10 12:09:06.183384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.476 [2024-06-10 12:09:06.183758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.476 [2024-06-10 12:09:06.183771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.476 [2024-06-10 12:09:06.183780] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.476 [2024-06-10 12:09:06.183930] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.476 [2024-06-10 12:09:06.184042] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.476 [2024-06-10 12:09:06.184051] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.476 [2024-06-10 12:09:06.184058] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.476 [2024-06-10 12:09:06.186392] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.476 [2024-06-10 12:09:06.195292] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.476 [2024-06-10 12:09:06.195917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.476 [2024-06-10 12:09:06.196298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.476 [2024-06-10 12:09:06.196312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.476 [2024-06-10 12:09:06.196321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.476 [2024-06-10 12:09:06.196469] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.476 [2024-06-10 12:09:06.196600] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.476 [2024-06-10 12:09:06.196608] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.476 [2024-06-10 12:09:06.196615] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.476 [2024-06-10 12:09:06.198923] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.476 [2024-06-10 12:09:06.207783] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.476 [2024-06-10 12:09:06.208477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.476 [2024-06-10 12:09:06.208853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.476 [2024-06-10 12:09:06.208866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.476 [2024-06-10 12:09:06.208875] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.476 [2024-06-10 12:09:06.209044] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.476 [2024-06-10 12:09:06.209178] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.476 [2024-06-10 12:09:06.209186] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.476 [2024-06-10 12:09:06.209198] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.476 [2024-06-10 12:09:06.211516] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.476 [2024-06-10 12:09:06.220363] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.476 [2024-06-10 12:09:06.220980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.476 [2024-06-10 12:09:06.221355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.476 [2024-06-10 12:09:06.221369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.476 [2024-06-10 12:09:06.221378] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.476 [2024-06-10 12:09:06.221526] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.476 [2024-06-10 12:09:06.221675] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.476 [2024-06-10 12:09:06.221683] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.476 [2024-06-10 12:09:06.221690] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.476 [2024-06-10 12:09:06.223962] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.476 [2024-06-10 12:09:06.232954] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.476 [2024-06-10 12:09:06.233564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.476 [2024-06-10 12:09:06.233937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.476 [2024-06-10 12:09:06.233949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.477 [2024-06-10 12:09:06.233959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.477 [2024-06-10 12:09:06.234109] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.477 [2024-06-10 12:09:06.234315] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.477 [2024-06-10 12:09:06.234324] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.477 [2024-06-10 12:09:06.234332] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.477 [2024-06-10 12:09:06.236635] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.477 [2024-06-10 12:09:06.245558] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.738 [2024-06-10 12:09:06.246115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.738 [2024-06-10 12:09:06.246467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.738 [2024-06-10 12:09:06.246478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.738 [2024-06-10 12:09:06.246485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.738 [2024-06-10 12:09:06.246613] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.738 [2024-06-10 12:09:06.246720] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.738 [2024-06-10 12:09:06.246728] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.738 [2024-06-10 12:09:06.246735] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.738 [2024-06-10 12:09:06.249263] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.738 [2024-06-10 12:09:06.258381] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.738 [2024-06-10 12:09:06.258984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.738 [2024-06-10 12:09:06.259363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.738 [2024-06-10 12:09:06.259378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.738 [2024-06-10 12:09:06.259387] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.738 [2024-06-10 12:09:06.259559] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.738 [2024-06-10 12:09:06.259712] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.738 [2024-06-10 12:09:06.259720] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.738 [2024-06-10 12:09:06.259728] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.738 [2024-06-10 12:09:06.262111] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.738 [2024-06-10 12:09:06.270830] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.738 [2024-06-10 12:09:06.271325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.738 [2024-06-10 12:09:06.271538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.738 [2024-06-10 12:09:06.271551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.738 [2024-06-10 12:09:06.271559] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.738 [2024-06-10 12:09:06.271695] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.738 [2024-06-10 12:09:06.271805] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.738 [2024-06-10 12:09:06.271813] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.738 [2024-06-10 12:09:06.271820] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.738 [2024-06-10 12:09:06.274321] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.738 [2024-06-10 12:09:06.283607] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.738 [2024-06-10 12:09:06.284157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.738 [2024-06-10 12:09:06.284527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.738 [2024-06-10 12:09:06.284537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.738 [2024-06-10 12:09:06.284545] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.738 [2024-06-10 12:09:06.284688] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.738 [2024-06-10 12:09:06.284813] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.739 [2024-06-10 12:09:06.284821] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.739 [2024-06-10 12:09:06.284828] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.739 [2024-06-10 12:09:06.287100] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.739 [2024-06-10 12:09:06.296406] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.739 [2024-06-10 12:09:06.297044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.739 [2024-06-10 12:09:06.297421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.739 [2024-06-10 12:09:06.297435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.739 [2024-06-10 12:09:06.297444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.739 [2024-06-10 12:09:06.297613] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.739 [2024-06-10 12:09:06.297763] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.739 [2024-06-10 12:09:06.297771] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.739 [2024-06-10 12:09:06.297778] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.739 [2024-06-10 12:09:06.300230] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.739 [2024-06-10 12:09:06.308858] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.739 [2024-06-10 12:09:06.309405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.739 [2024-06-10 12:09:06.309780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.739 [2024-06-10 12:09:06.309789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.739 [2024-06-10 12:09:06.309797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.739 [2024-06-10 12:09:06.309947] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.739 [2024-06-10 12:09:06.310096] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.739 [2024-06-10 12:09:06.310104] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.739 [2024-06-10 12:09:06.310111] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.739 [2024-06-10 12:09:06.312259] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.739 [2024-06-10 12:09:06.321514] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.739 [2024-06-10 12:09:06.322097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.739 [2024-06-10 12:09:06.322392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.739 [2024-06-10 12:09:06.322408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.739 [2024-06-10 12:09:06.322417] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.739 [2024-06-10 12:09:06.322583] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.739 [2024-06-10 12:09:06.322736] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.739 [2024-06-10 12:09:06.322744] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.739 [2024-06-10 12:09:06.322751] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.739 [2024-06-10 12:09:06.325091] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.739 [2024-06-10 12:09:06.334035] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.739 [2024-06-10 12:09:06.334744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.739 [2024-06-10 12:09:06.335110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.739 [2024-06-10 12:09:06.335123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.739 [2024-06-10 12:09:06.335132] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.739 [2024-06-10 12:09:06.335337] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.739 [2024-06-10 12:09:06.335493] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.739 [2024-06-10 12:09:06.335501] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.739 [2024-06-10 12:09:06.335508] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.739 [2024-06-10 12:09:06.338077] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.739 [2024-06-10 12:09:06.346579] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.739 [2024-06-10 12:09:06.347230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.739 [2024-06-10 12:09:06.347595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.739 [2024-06-10 12:09:06.347607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.739 [2024-06-10 12:09:06.347617] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.739 [2024-06-10 12:09:06.347783] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.739 [2024-06-10 12:09:06.347976] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.739 [2024-06-10 12:09:06.347984] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.739 [2024-06-10 12:09:06.347992] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.739 [2024-06-10 12:09:06.350339] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.739 [2024-06-10 12:09:06.359284] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.739 [2024-06-10 12:09:06.359952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.739 [2024-06-10 12:09:06.360343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.739 [2024-06-10 12:09:06.360357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.739 [2024-06-10 12:09:06.360366] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.739 [2024-06-10 12:09:06.360492] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.739 [2024-06-10 12:09:06.360641] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.739 [2024-06-10 12:09:06.360649] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.739 [2024-06-10 12:09:06.360657] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.739 [2024-06-10 12:09:06.362879] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.739 [2024-06-10 12:09:06.371824] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.739 [2024-06-10 12:09:06.372220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.739 [2024-06-10 12:09:06.372576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.739 [2024-06-10 12:09:06.372586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.739 [2024-06-10 12:09:06.372598] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.739 [2024-06-10 12:09:06.372708] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.739 [2024-06-10 12:09:06.372799] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.739 [2024-06-10 12:09:06.372807] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.739 [2024-06-10 12:09:06.372814] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.739 [2024-06-10 12:09:06.374991] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.739 [2024-06-10 12:09:06.384503] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.739 [2024-06-10 12:09:06.385038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.739 [2024-06-10 12:09:06.385493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.739 [2024-06-10 12:09:06.385530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.739 [2024-06-10 12:09:06.385541] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.739 [2024-06-10 12:09:06.385732] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.739 [2024-06-10 12:09:06.385864] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.739 [2024-06-10 12:09:06.385872] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.739 [2024-06-10 12:09:06.385880] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.739 [2024-06-10 12:09:06.388332] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.739 [2024-06-10 12:09:06.397165] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.739 [2024-06-10 12:09:06.397712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.739 [2024-06-10 12:09:06.398086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.739 [2024-06-10 12:09:06.398099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.739 [2024-06-10 12:09:06.398108] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.739 [2024-06-10 12:09:06.398306] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.739 [2024-06-10 12:09:06.398438] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.739 [2024-06-10 12:09:06.398446] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.739 [2024-06-10 12:09:06.398453] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.740 [2024-06-10 12:09:06.400828] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.740 [2024-06-10 12:09:06.409870] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.740 [2024-06-10 12:09:06.410323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.740 [2024-06-10 12:09:06.410743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.740 [2024-06-10 12:09:06.410756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.740 [2024-06-10 12:09:06.410765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.740 [2024-06-10 12:09:06.410916] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.740 [2024-06-10 12:09:06.411107] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.740 [2024-06-10 12:09:06.411115] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.740 [2024-06-10 12:09:06.411122] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.740 [2024-06-10 12:09:06.413308] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.740 [2024-06-10 12:09:06.422156] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.740 [2024-06-10 12:09:06.422689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.740 [2024-06-10 12:09:06.423076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.740 [2024-06-10 12:09:06.423089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.740 [2024-06-10 12:09:06.423098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.740 [2024-06-10 12:09:06.423278] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.740 [2024-06-10 12:09:06.423453] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.740 [2024-06-10 12:09:06.423461] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.740 [2024-06-10 12:09:06.423469] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.740 [2024-06-10 12:09:06.425727] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.740 [2024-06-10 12:09:06.434543] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.740 [2024-06-10 12:09:06.435129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.740 [2024-06-10 12:09:06.435513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.740 [2024-06-10 12:09:06.435527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.740 [2024-06-10 12:09:06.435536] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.740 [2024-06-10 12:09:06.435646] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.740 [2024-06-10 12:09:06.435740] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.740 [2024-06-10 12:09:06.435748] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.740 [2024-06-10 12:09:06.435756] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.740 [2024-06-10 12:09:06.438083] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.740 [2024-06-10 12:09:06.447068] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.740 [2024-06-10 12:09:06.447682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.740 [2024-06-10 12:09:06.448057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.740 [2024-06-10 12:09:06.448069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.740 [2024-06-10 12:09:06.448079] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.740 [2024-06-10 12:09:06.448296] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.740 [2024-06-10 12:09:06.448432] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.740 [2024-06-10 12:09:06.448440] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.740 [2024-06-10 12:09:06.448448] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.740 [2024-06-10 12:09:06.450751] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.740 [2024-06-10 12:09:06.459733] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.740 [2024-06-10 12:09:06.460249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.740 [2024-06-10 12:09:06.460597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.740 [2024-06-10 12:09:06.460607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.740 [2024-06-10 12:09:06.460615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.740 [2024-06-10 12:09:06.460762] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.740 [2024-06-10 12:09:06.460893] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.740 [2024-06-10 12:09:06.460901] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.740 [2024-06-10 12:09:06.460908] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.740 [2024-06-10 12:09:06.463133] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.740 [2024-06-10 12:09:06.472260] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.740 [2024-06-10 12:09:06.472749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.740 [2024-06-10 12:09:06.473130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.740 [2024-06-10 12:09:06.473142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.740 [2024-06-10 12:09:06.473151] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.740 [2024-06-10 12:09:06.473307] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.740 [2024-06-10 12:09:06.473457] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.740 [2024-06-10 12:09:06.473466] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.740 [2024-06-10 12:09:06.473473] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.740 [2024-06-10 12:09:06.475869] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.740 [2024-06-10 12:09:06.484854] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.740 [2024-06-10 12:09:06.485216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.740 [2024-06-10 12:09:06.485570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.740 [2024-06-10 12:09:06.485581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.740 [2024-06-10 12:09:06.485589] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.740 [2024-06-10 12:09:06.485802] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.740 [2024-06-10 12:09:06.485933] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.740 [2024-06-10 12:09:06.485945] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.740 [2024-06-10 12:09:06.485952] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.740 [2024-06-10 12:09:06.488203] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.740 [2024-06-10 12:09:06.497713] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.740 [2024-06-10 12:09:06.498252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.740 [2024-06-10 12:09:06.498603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.740 [2024-06-10 12:09:06.498613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:12.740 [2024-06-10 12:09:06.498620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:12.740 [2024-06-10 12:09:06.498709] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:12.740 [2024-06-10 12:09:06.498876] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.740 [2024-06-10 12:09:06.498884] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.740 [2024-06-10 12:09:06.498891] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.740 [2024-06-10 12:09:06.501196] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.002 [2024-06-10 12:09:06.510312] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.002 [2024-06-10 12:09:06.510898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.002 [2024-06-10 12:09:06.511280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.002 [2024-06-10 12:09:06.511294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.002 [2024-06-10 12:09:06.511304] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.002 [2024-06-10 12:09:06.511476] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.002 [2024-06-10 12:09:06.511607] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.002 [2024-06-10 12:09:06.511615] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.002 [2024-06-10 12:09:06.511623] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.002 [2024-06-10 12:09:06.513849] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.002 [2024-06-10 12:09:06.523205] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.002 [2024-06-10 12:09:06.523702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.002 [2024-06-10 12:09:06.524055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.002 [2024-06-10 12:09:06.524065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.002 [2024-06-10 12:09:06.524073] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.002 [2024-06-10 12:09:06.524165] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.002 [2024-06-10 12:09:06.524321] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.002 [2024-06-10 12:09:06.524330] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.002 [2024-06-10 12:09:06.524341] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.002 [2024-06-10 12:09:06.526540] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.002 [2024-06-10 12:09:06.535814] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.002 [2024-06-10 12:09:06.536347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.002 [2024-06-10 12:09:06.536776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.003 [2024-06-10 12:09:06.536785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.003 [2024-06-10 12:09:06.536792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.003 [2024-06-10 12:09:06.536957] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.003 [2024-06-10 12:09:06.537146] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.003 [2024-06-10 12:09:06.537154] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.003 [2024-06-10 12:09:06.537161] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.003 [2024-06-10 12:09:06.539581] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.003 [2024-06-10 12:09:06.548441] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.003 [2024-06-10 12:09:06.549045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.003 [2024-06-10 12:09:06.549426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.003 [2024-06-10 12:09:06.549441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.003 [2024-06-10 12:09:06.549450] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.003 [2024-06-10 12:09:06.549658] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.003 [2024-06-10 12:09:06.549873] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.003 [2024-06-10 12:09:06.549881] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.003 [2024-06-10 12:09:06.549888] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.003 [2024-06-10 12:09:06.552467] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.003 [2024-06-10 12:09:06.561249] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.003 [2024-06-10 12:09:06.561794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.003 [2024-06-10 12:09:06.562066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.003 [2024-06-10 12:09:06.562075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.003 [2024-06-10 12:09:06.562083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.003 [2024-06-10 12:09:06.562271] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.003 [2024-06-10 12:09:06.562416] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.003 [2024-06-10 12:09:06.562423] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.003 [2024-06-10 12:09:06.562430] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.003 [2024-06-10 12:09:06.564707] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.003 [2024-06-10 12:09:06.573806] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.003 [2024-06-10 12:09:06.574488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.003 [2024-06-10 12:09:06.574864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.003 [2024-06-10 12:09:06.574877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.003 [2024-06-10 12:09:06.574886] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.003 [2024-06-10 12:09:06.575036] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.003 [2024-06-10 12:09:06.575210] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.003 [2024-06-10 12:09:06.575218] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.003 [2024-06-10 12:09:06.575226] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.003 [2024-06-10 12:09:06.577800] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.003 [2024-06-10 12:09:06.586476] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.003 [2024-06-10 12:09:06.587103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.003 [2024-06-10 12:09:06.587496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.003 [2024-06-10 12:09:06.587511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.003 [2024-06-10 12:09:06.587520] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.003 [2024-06-10 12:09:06.587692] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.003 [2024-06-10 12:09:06.587900] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.003 [2024-06-10 12:09:06.587908] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.003 [2024-06-10 12:09:06.587916] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.003 [2024-06-10 12:09:06.590320] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.003 [2024-06-10 12:09:06.599024] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.003 [2024-06-10 12:09:06.599593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.003 [2024-06-10 12:09:06.599968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.003 [2024-06-10 12:09:06.599981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.003 [2024-06-10 12:09:06.599990] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.003 [2024-06-10 12:09:06.600119] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.003 [2024-06-10 12:09:06.600284] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.003 [2024-06-10 12:09:06.600293] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.003 [2024-06-10 12:09:06.600300] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.003 [2024-06-10 12:09:06.602620] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.003 [2024-06-10 12:09:06.611551] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.003 [2024-06-10 12:09:06.612203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.003 [2024-06-10 12:09:06.612619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.003 [2024-06-10 12:09:06.612633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.003 [2024-06-10 12:09:06.612643] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.003 [2024-06-10 12:09:06.612814] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.003 [2024-06-10 12:09:06.612924] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.003 [2024-06-10 12:09:06.612933] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.003 [2024-06-10 12:09:06.612940] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.003 [2024-06-10 12:09:06.615298] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.004 [2024-06-10 12:09:06.624146] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.004 [2024-06-10 12:09:06.624628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.004 [2024-06-10 12:09:06.624977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.004 [2024-06-10 12:09:06.624987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.004 [2024-06-10 12:09:06.624995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.004 [2024-06-10 12:09:06.625148] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.004 [2024-06-10 12:09:06.625279] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.004 [2024-06-10 12:09:06.625287] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.004 [2024-06-10 12:09:06.625294] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.004 [2024-06-10 12:09:06.627657] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.004 [2024-06-10 12:09:06.636681] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.004 [2024-06-10 12:09:06.637210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.004 [2024-06-10 12:09:06.637561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.004 [2024-06-10 12:09:06.637572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.004 [2024-06-10 12:09:06.637580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.004 [2024-06-10 12:09:06.637668] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.004 [2024-06-10 12:09:06.637758] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.004 [2024-06-10 12:09:06.637766] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.004 [2024-06-10 12:09:06.637772] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.004 [2024-06-10 12:09:06.640173] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.004 [2024-06-10 12:09:06.649363] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.004 [2024-06-10 12:09:06.649857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.004 [2024-06-10 12:09:06.650207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.004 [2024-06-10 12:09:06.650217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.004 [2024-06-10 12:09:06.650224] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.004 [2024-06-10 12:09:06.650400] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.004 [2024-06-10 12:09:06.650510] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.004 [2024-06-10 12:09:06.650517] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.004 [2024-06-10 12:09:06.650524] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.004 [2024-06-10 12:09:06.652905] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.004 [2024-06-10 12:09:06.661861] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.004 [2024-06-10 12:09:06.662402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.004 [2024-06-10 12:09:06.662771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.004 [2024-06-10 12:09:06.662781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.004 [2024-06-10 12:09:06.662788] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.004 [2024-06-10 12:09:06.662897] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.004 [2024-06-10 12:09:06.663003] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.004 [2024-06-10 12:09:06.663010] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.004 [2024-06-10 12:09:06.663017] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.004 [2024-06-10 12:09:06.665477] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.004 [2024-06-10 12:09:06.674566] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.004 [2024-06-10 12:09:06.675267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.004 [2024-06-10 12:09:06.675640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.004 [2024-06-10 12:09:06.675653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.004 [2024-06-10 12:09:06.675662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.004 [2024-06-10 12:09:06.675812] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.004 [2024-06-10 12:09:06.675925] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.004 [2024-06-10 12:09:06.675933] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.004 [2024-06-10 12:09:06.675940] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.004 [2024-06-10 12:09:06.678224] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.004 [2024-06-10 12:09:06.687378] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.004 [2024-06-10 12:09:06.687995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.004 [2024-06-10 12:09:06.688364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.004 [2024-06-10 12:09:06.688382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.004 [2024-06-10 12:09:06.688391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.004 [2024-06-10 12:09:06.688557] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.004 [2024-06-10 12:09:06.688707] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.004 [2024-06-10 12:09:06.688715] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.004 [2024-06-10 12:09:06.688723] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.004 [2024-06-10 12:09:06.691048] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.004 [2024-06-10 12:09:06.699835] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.004 [2024-06-10 12:09:06.700332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.004 [2024-06-10 12:09:06.700694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.004 [2024-06-10 12:09:06.700703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.004 [2024-06-10 12:09:06.700711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.004 [2024-06-10 12:09:06.700845] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.004 [2024-06-10 12:09:06.701040] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.004 [2024-06-10 12:09:06.701048] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.005 [2024-06-10 12:09:06.701055] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.005 [2024-06-10 12:09:06.703427] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.005 [2024-06-10 12:09:06.712459] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.005 [2024-06-10 12:09:06.712897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.005 [2024-06-10 12:09:06.713345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.005 [2024-06-10 12:09:06.713360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.005 [2024-06-10 12:09:06.713369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.005 [2024-06-10 12:09:06.713541] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.005 [2024-06-10 12:09:06.713691] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.005 [2024-06-10 12:09:06.713699] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.005 [2024-06-10 12:09:06.713706] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.005 [2024-06-10 12:09:06.715936] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.005 [2024-06-10 12:09:06.725112] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.005 [2024-06-10 12:09:06.725723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.005 [2024-06-10 12:09:06.726095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.005 [2024-06-10 12:09:06.726107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.005 [2024-06-10 12:09:06.726124] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.005 [2024-06-10 12:09:06.726284] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.005 [2024-06-10 12:09:06.726418] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.005 [2024-06-10 12:09:06.726426] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.005 [2024-06-10 12:09:06.726433] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.005 [2024-06-10 12:09:06.728739] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.005 [2024-06-10 12:09:06.737753] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.005 [2024-06-10 12:09:06.738281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.005 [2024-06-10 12:09:06.738630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.005 [2024-06-10 12:09:06.738639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.005 [2024-06-10 12:09:06.738647] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.005 [2024-06-10 12:09:06.738796] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.005 [2024-06-10 12:09:06.738946] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.005 [2024-06-10 12:09:06.738954] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.005 [2024-06-10 12:09:06.738960] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.005 [2024-06-10 12:09:06.741252] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.005 [2024-06-10 12:09:06.750194] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.005 [2024-06-10 12:09:06.750686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.005 [2024-06-10 12:09:06.750973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.005 [2024-06-10 12:09:06.750982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.005 [2024-06-10 12:09:06.750989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.005 [2024-06-10 12:09:06.751191] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.005 [2024-06-10 12:09:06.751384] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.005 [2024-06-10 12:09:06.751392] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.005 [2024-06-10 12:09:06.751399] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.005 [2024-06-10 12:09:06.753756] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.005 [2024-06-10 12:09:06.762701] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.005 [2024-06-10 12:09:06.763201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.005 [2024-06-10 12:09:06.763549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.005 [2024-06-10 12:09:06.763559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.005 [2024-06-10 12:09:06.763566] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.005 [2024-06-10 12:09:06.763661] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.005 [2024-06-10 12:09:06.763792] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.005 [2024-06-10 12:09:06.763799] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.005 [2024-06-10 12:09:06.763806] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.005 [2024-06-10 12:09:06.766013] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.267 [2024-06-10 12:09:06.775488] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.267 [2024-06-10 12:09:06.776070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-06-10 12:09:06.776445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-06-10 12:09:06.776459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.267 [2024-06-10 12:09:06.776469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.267 [2024-06-10 12:09:06.776638] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.267 [2024-06-10 12:09:06.776788] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.267 [2024-06-10 12:09:06.776796] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.267 [2024-06-10 12:09:06.776803] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.267 [2024-06-10 12:09:06.778936] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.267 [2024-06-10 12:09:06.788183] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.267 [2024-06-10 12:09:06.788757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-06-10 12:09:06.789132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-06-10 12:09:06.789145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.267 [2024-06-10 12:09:06.789154] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.267 [2024-06-10 12:09:06.789341] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.267 [2024-06-10 12:09:06.789492] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.267 [2024-06-10 12:09:06.789500] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.267 [2024-06-10 12:09:06.789507] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.267 [2024-06-10 12:09:06.791854] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.267 [2024-06-10 12:09:06.800715] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.267 [2024-06-10 12:09:06.801333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-06-10 12:09:06.801715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-06-10 12:09:06.801728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.267 [2024-06-10 12:09:06.801737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.267 [2024-06-10 12:09:06.801865] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.267 [2024-06-10 12:09:06.802084] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.267 [2024-06-10 12:09:06.802092] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.267 [2024-06-10 12:09:06.802099] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.267 [2024-06-10 12:09:06.804599] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.267 [2024-06-10 12:09:06.813322] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.267 [2024-06-10 12:09:06.813975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-06-10 12:09:06.814351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-06-10 12:09:06.814365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.267 [2024-06-10 12:09:06.814375] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.267 [2024-06-10 12:09:06.814525] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.267 [2024-06-10 12:09:06.814678] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.267 [2024-06-10 12:09:06.814686] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.267 [2024-06-10 12:09:06.814693] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.267 [2024-06-10 12:09:06.816987] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.267 [2024-06-10 12:09:06.825809] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.267 [2024-06-10 12:09:06.826361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-06-10 12:09:06.826713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.267 [2024-06-10 12:09:06.826723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.268 [2024-06-10 12:09:06.826731] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.268 [2024-06-10 12:09:06.826881] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.268 [2024-06-10 12:09:06.827054] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.268 [2024-06-10 12:09:06.827062] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.268 [2024-06-10 12:09:06.827069] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.268 [2024-06-10 12:09:06.829588] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.268 [2024-06-10 12:09:06.838281] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.268 [2024-06-10 12:09:06.838754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-06-10 12:09:06.839101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-06-10 12:09:06.839110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.268 [2024-06-10 12:09:06.839117] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.268 [2024-06-10 12:09:06.839297] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.268 [2024-06-10 12:09:06.839404] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.268 [2024-06-10 12:09:06.839415] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.268 [2024-06-10 12:09:06.839422] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.268 [2024-06-10 12:09:06.841531] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.268 [2024-06-10 12:09:06.851053] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.268 [2024-06-10 12:09:06.851589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-06-10 12:09:06.851908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-06-10 12:09:06.851921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.268 [2024-06-10 12:09:06.851930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.268 [2024-06-10 12:09:06.852102] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.268 [2024-06-10 12:09:06.852262] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.268 [2024-06-10 12:09:06.852271] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.268 [2024-06-10 12:09:06.852278] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.268 [2024-06-10 12:09:06.854574] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.268 [2024-06-10 12:09:06.863766] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.268 [2024-06-10 12:09:06.864345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-06-10 12:09:06.864792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-06-10 12:09:06.864805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.268 [2024-06-10 12:09:06.864814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.268 [2024-06-10 12:09:06.864961] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.268 [2024-06-10 12:09:06.865098] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.268 [2024-06-10 12:09:06.865106] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.268 [2024-06-10 12:09:06.865114] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.268 [2024-06-10 12:09:06.867641] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.268 [2024-06-10 12:09:06.876635] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.268 [2024-06-10 12:09:06.877256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-06-10 12:09:06.877645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-06-10 12:09:06.877661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.268 [2024-06-10 12:09:06.877670] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.268 [2024-06-10 12:09:06.877882] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.268 [2024-06-10 12:09:06.878013] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.268 [2024-06-10 12:09:06.878021] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.268 [2024-06-10 12:09:06.878033] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.268 [2024-06-10 12:09:06.880420] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.268 [2024-06-10 12:09:06.889256] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.268 [2024-06-10 12:09:06.889793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-06-10 12:09:06.890148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-06-10 12:09:06.890157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.268 [2024-06-10 12:09:06.890165] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.268 [2024-06-10 12:09:06.890279] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.268 [2024-06-10 12:09:06.890368] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.268 [2024-06-10 12:09:06.890375] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.268 [2024-06-10 12:09:06.890382] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.268 [2024-06-10 12:09:06.892881] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.268 [2024-06-10 12:09:06.901860] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.268 [2024-06-10 12:09:06.902543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-06-10 12:09:06.902828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-06-10 12:09:06.902842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.268 [2024-06-10 12:09:06.902851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.268 [2024-06-10 12:09:06.903039] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.268 [2024-06-10 12:09:06.903189] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.268 [2024-06-10 12:09:06.903197] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.268 [2024-06-10 12:09:06.903205] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.268 [2024-06-10 12:09:06.905783] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.268 [2024-06-10 12:09:06.914415] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.268 [2024-06-10 12:09:06.915069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-06-10 12:09:06.915443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-06-10 12:09:06.915457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.268 [2024-06-10 12:09:06.915466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.268 [2024-06-10 12:09:06.915632] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.268 [2024-06-10 12:09:06.915763] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.268 [2024-06-10 12:09:06.915771] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.268 [2024-06-10 12:09:06.915779] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.268 [2024-06-10 12:09:06.918258] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.268 [2024-06-10 12:09:06.926991] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.268 [2024-06-10 12:09:06.927630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-06-10 12:09:06.928005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-06-10 12:09:06.928018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.268 [2024-06-10 12:09:06.928027] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.268 [2024-06-10 12:09:06.928193] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.268 [2024-06-10 12:09:06.928352] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.268 [2024-06-10 12:09:06.928361] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.268 [2024-06-10 12:09:06.928368] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.268 [2024-06-10 12:09:06.930572] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2148068 Killed "${NVMF_APP[@]}" "$@" 00:31:13.268 12:09:06 -- host/bdevperf.sh@36 -- # tgt_init 00:31:13.268 12:09:06 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:13.268 12:09:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:13.268 12:09:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:13.268 12:09:06 -- common/autotest_common.sh@10 -- # set +x 00:31:13.268 [2024-06-10 12:09:06.939452] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.268 [2024-06-10 12:09:06.939930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-06-10 12:09:06.940317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.268 [2024-06-10 12:09:06.940331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.269 [2024-06-10 12:09:06.940340] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.269 [2024-06-10 12:09:06.940549] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.269 [2024-06-10 12:09:06.940662] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.269 [2024-06-10 12:09:06.940670] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.269 [2024-06-10 12:09:06.940677] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.269 12:09:06 -- nvmf/common.sh@469 -- # nvmfpid=2149875 00:31:13.269 12:09:06 -- nvmf/common.sh@470 -- # waitforlisten 2149875 00:31:13.269 12:09:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:13.269 12:09:06 -- common/autotest_common.sh@819 -- # '[' -z 2149875 ']' 00:31:13.269 12:09:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.269 12:09:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:13.269 12:09:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:13.269 12:09:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:13.269 12:09:06 -- common/autotest_common.sh@10 -- # set +x 00:31:13.269 [2024-06-10 12:09:06.943054] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
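Editor's note: the block above is where bdevperf.sh kills the previous target process (pid 2148068) and tgt_init relaunches it: nvmfappstart starts nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waitforlisten blocks until the new process (pid 2149875) answers on /var/tmp/spdk.sock, while the host side keeps logging failed reconnects in the background. A rough, simplified sketch of that start-and-wait pattern follows; the binary path, namespace, and flags are copied from the log, but the polling loop is an assumption standing in for the real nvmf/common.sh and autotest_common.sh helpers, which use the SPDK RPC client rather than a bare socket check.

# Simplified stand-in for the nvmfappstart / waitforlisten sequence shown above.
NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
RPC_SOCK=/var/tmp/spdk.sock

ip netns exec cvl_0_0_ns_spdk "${NVMF_TGT}" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Poll until the target is listening on its RPC socket (give up after ~10 s).
for _ in $(seq 1 100); do
    if [ -S "${RPC_SOCK}" ]; then
        echo "nvmf_tgt (pid ${nvmfpid}) is up"
        break
    fi
    sleep 0.1
done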
00:31:13.269 [2024-06-10 12:09:06.952206] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.269 [2024-06-10 12:09:06.952764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-06-10 12:09:06.953026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-06-10 12:09:06.953036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.269 [2024-06-10 12:09:06.953044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.269 [2024-06-10 12:09:06.953228] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.269 [2024-06-10 12:09:06.953326] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.269 [2024-06-10 12:09:06.953335] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.269 [2024-06-10 12:09:06.953342] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.269 [2024-06-10 12:09:06.955668] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.269 [2024-06-10 12:09:06.964832] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.269 [2024-06-10 12:09:06.965334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-06-10 12:09:06.965688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-06-10 12:09:06.965699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.269 [2024-06-10 12:09:06.965706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.269 [2024-06-10 12:09:06.965854] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.269 [2024-06-10 12:09:06.965984] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.269 [2024-06-10 12:09:06.965993] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.269 [2024-06-10 12:09:06.966000] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.269 [2024-06-10 12:09:06.968339] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.269 [2024-06-10 12:09:06.977557] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.269 [2024-06-10 12:09:06.978110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-06-10 12:09:06.978463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-06-10 12:09:06.978474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.269 [2024-06-10 12:09:06.978481] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.269 [2024-06-10 12:09:06.978689] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.269 [2024-06-10 12:09:06.978817] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.269 [2024-06-10 12:09:06.978825] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.269 [2024-06-10 12:09:06.978832] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.269 [2024-06-10 12:09:06.981255] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.269 [2024-06-10 12:09:06.987339] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:13.269 [2024-06-10 12:09:06.987382] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:13.269 [2024-06-10 12:09:06.990053] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.269 [2024-06-10 12:09:06.990572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-06-10 12:09:06.990961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-06-10 12:09:06.990971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.269 [2024-06-10 12:09:06.990979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.269 [2024-06-10 12:09:06.991151] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.269 [2024-06-10 12:09:06.991309] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.269 [2024-06-10 12:09:06.991317] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.269 [2024-06-10 12:09:06.991324] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.269 [2024-06-10 12:09:06.993799] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.269 [2024-06-10 12:09:07.002625] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.269 [2024-06-10 12:09:07.003285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-06-10 12:09:07.003699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-06-10 12:09:07.003712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.269 [2024-06-10 12:09:07.003721] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.269 [2024-06-10 12:09:07.003896] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.269 [2024-06-10 12:09:07.004047] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.269 [2024-06-10 12:09:07.004055] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.269 [2024-06-10 12:09:07.004062] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.269 [2024-06-10 12:09:07.006410] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.269 [2024-06-10 12:09:07.015071] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.269 [2024-06-10 12:09:07.015571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-06-10 12:09:07.015897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.269 [2024-06-10 12:09:07.015911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.269 [2024-06-10 12:09:07.015921] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.269 [2024-06-10 12:09:07.016155] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.269 [2024-06-10 12:09:07.016290] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.269 [2024-06-10 12:09:07.016298] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.269 [2024-06-10 12:09:07.016305] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.269 [2024-06-10 12:09:07.018810] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.269 EAL: No free 2048 kB hugepages reported on node 1
00:31:13.269 [2024-06-10 12:09:07.027615] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:13.269 [2024-06-10 12:09:07.028228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.269 [2024-06-10 12:09:07.028621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.269 [2024-06-10 12:09:07.028634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420
00:31:13.269 [2024-06-10 12:09:07.028644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set
00:31:13.269 [2024-06-10 12:09:07.028833] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor
00:31:13.269 [2024-06-10 12:09:07.028945] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:13.269 [2024-06-10 12:09:07.028954] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:13.269 [2024-06-10 12:09:07.028961] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:13.269 [2024-06-10 12:09:07.031375] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:13.532 [2024-06-10 12:09:07.040263] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:13.532 [2024-06-10 12:09:07.040842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.532 [2024-06-10 12:09:07.041152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.532 [2024-06-10 12:09:07.041162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420
00:31:13.532 [2024-06-10 12:09:07.041170] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set
00:31:13.532 [2024-06-10 12:09:07.041288] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor
00:31:13.532 [2024-06-10 12:09:07.041436] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:13.532 [2024-06-10 12:09:07.041444] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:13.532 [2024-06-10 12:09:07.041450] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:13.532 [2024-06-10 12:09:07.043746] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:13.532 [2024-06-10 12:09:07.052830] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.532 [2024-06-10 12:09:07.053336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.532 [2024-06-10 12:09:07.053686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.532 [2024-06-10 12:09:07.053696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.532 [2024-06-10 12:09:07.053703] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.532 [2024-06-10 12:09:07.053816] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.532 [2024-06-10 12:09:07.053963] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.532 [2024-06-10 12:09:07.053970] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.532 [2024-06-10 12:09:07.053977] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.532 [2024-06-10 12:09:07.056434] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.532 [2024-06-10 12:09:07.065321] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.532 [2024-06-10 12:09:07.065983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.532 [2024-06-10 12:09:07.066390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.532 [2024-06-10 12:09:07.066405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.532 [2024-06-10 12:09:07.066415] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.532 [2024-06-10 12:09:07.066581] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.532 [2024-06-10 12:09:07.066697] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.532 [2024-06-10 12:09:07.066705] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.532 [2024-06-10 12:09:07.066713] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.532 [2024-06-10 12:09:07.069215] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.532 [2024-06-10 12:09:07.071016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:31:13.532 [2024-06-10 12:09:07.077911] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:13.532 [2024-06-10 12:09:07.078441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.532 [2024-06-10 12:09:07.078672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.532 [2024-06-10 12:09:07.078689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420
00:31:13.532 [2024-06-10 12:09:07.078697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set
00:31:13.532 [2024-06-10 12:09:07.078870] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor
00:31:13.532 [2024-06-10 12:09:07.079002] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:13.532 [2024-06-10 12:09:07.079009] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:13.532 [2024-06-10 12:09:07.079017] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:13.532 [2024-06-10 12:09:07.081540] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:13.532 [2024-06-10 12:09:07.090545] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:13.532 [2024-06-10 12:09:07.091052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.532 [2024-06-10 12:09:07.091383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.532 [2024-06-10 12:09:07.091394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420
00:31:13.532 [2024-06-10 12:09:07.091402] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set
00:31:13.533 [2024-06-10 12:09:07.091571] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor
00:31:13.533 [2024-06-10 12:09:07.091740] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:13.533 [2024-06-10 12:09:07.091748] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:13.533 [2024-06-10 12:09:07.091754] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:13.533 [2024-06-10 12:09:07.094209] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:13.533 [2024-06-10 12:09:07.103031] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:13.533 [2024-06-10 12:09:07.103589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.533 [2024-06-10 12:09:07.103940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.533 [2024-06-10 12:09:07.103955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420
00:31:13.533 [2024-06-10 12:09:07.103964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set
00:31:13.533 [2024-06-10 12:09:07.104114] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor
00:31:13.533 [2024-06-10 12:09:07.104251] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:13.533 [2024-06-10 12:09:07.104260] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:13.533 [2024-06-10 12:09:07.104267] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:13.533 [2024-06-10 12:09:07.106484] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:13.533 [2024-06-10 12:09:07.115494] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:13.533 [2024-06-10 12:09:07.115976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.533 [2024-06-10 12:09:07.116332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.533 [2024-06-10 12:09:07.116342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420
00:31:13.533 [2024-06-10 12:09:07.116349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set
00:31:13.533 [2024-06-10 12:09:07.116518] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor
00:31:13.533 [2024-06-10 12:09:07.116624] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:13.533 [2024-06-10 12:09:07.116631] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:13.533 [2024-06-10 12:09:07.116638] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:13.533 [2024-06-10 12:09:07.119153] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:13.533 [2024-06-10 12:09:07.123705] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:31:13.533 [2024-06-10 12:09:07.123789] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:13.533 [2024-06-10 12:09:07.123795] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:13.533 [2024-06-10 12:09:07.123800] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:13.533 [2024-06-10 12:09:07.123916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:31:13.533 [2024-06-10 12:09:07.124068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:31:13.533 [2024-06-10 12:09:07.124070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:31:13.533 [2024-06-10 12:09:07.128288] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:13.533 [2024-06-10 12:09:07.128633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.533 [2024-06-10 12:09:07.128884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.533 [2024-06-10 12:09:07.128893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420
00:31:13.533 [2024-06-10 12:09:07.128901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set
00:31:13.533 [2024-06-10 12:09:07.129069] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor
00:31:13.533 [2024-06-10 12:09:07.129238] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:13.533 [2024-06-10 12:09:07.129250] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:13.533 [2024-06-10 12:09:07.129261] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:13.533 [2024-06-10 12:09:07.131672] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:13.533 [2024-06-10 12:09:07.141099] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:13.533 [2024-06-10 12:09:07.141702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.533 [2024-06-10 12:09:07.142104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.533 [2024-06-10 12:09:07.142117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420
00:31:13.533 [2024-06-10 12:09:07.142127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set
00:31:13.533 [2024-06-10 12:09:07.142264] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor
00:31:13.533 [2024-06-10 12:09:07.142378] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:13.533 [2024-06-10 12:09:07.142386] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:13.533 [2024-06-10 12:09:07.142393] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:13.533 [2024-06-10 12:09:07.144873] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:13.533 [2024-06-10 12:09:07.153809] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.533 [2024-06-10 12:09:07.154323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.533 [2024-06-10 12:09:07.154540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.533 [2024-06-10 12:09:07.154550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.533 [2024-06-10 12:09:07.154557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.533 [2024-06-10 12:09:07.154627] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.533 [2024-06-10 12:09:07.154773] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.533 [2024-06-10 12:09:07.154781] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.533 [2024-06-10 12:09:07.154788] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.533 [2024-06-10 12:09:07.157083] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.533 [2024-06-10 12:09:07.166728] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.533 [2024-06-10 12:09:07.167268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.533 [2024-06-10 12:09:07.167647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.533 [2024-06-10 12:09:07.167657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.533 [2024-06-10 12:09:07.167665] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.533 [2024-06-10 12:09:07.167835] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.533 [2024-06-10 12:09:07.167945] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.533 [2024-06-10 12:09:07.167953] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.533 [2024-06-10 12:09:07.167965] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.533 [2024-06-10 12:09:07.170190] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.533 [2024-06-10 12:09:07.179212] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.533 [2024-06-10 12:09:07.179720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.533 [2024-06-10 12:09:07.179977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.533 [2024-06-10 12:09:07.179996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.533 [2024-06-10 12:09:07.180006] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.533 [2024-06-10 12:09:07.180199] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.533 [2024-06-10 12:09:07.180364] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.533 [2024-06-10 12:09:07.180373] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.533 [2024-06-10 12:09:07.180380] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.533 [2024-06-10 12:09:07.182823] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.533 [2024-06-10 12:09:07.191846] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.533 [2024-06-10 12:09:07.192344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.533 [2024-06-10 12:09:07.192788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.533 [2024-06-10 12:09:07.192801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.533 [2024-06-10 12:09:07.192811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.533 [2024-06-10 12:09:07.192940] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.533 [2024-06-10 12:09:07.193112] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.533 [2024-06-10 12:09:07.193120] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.533 [2024-06-10 12:09:07.193127] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.533 [2024-06-10 12:09:07.195402] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.533 [2024-06-10 12:09:07.204730] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.533 [2024-06-10 12:09:07.205339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.533 [2024-06-10 12:09:07.205734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.533 [2024-06-10 12:09:07.205746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.534 [2024-06-10 12:09:07.205756] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.534 [2024-06-10 12:09:07.205925] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.534 [2024-06-10 12:09:07.206137] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.534 [2024-06-10 12:09:07.206146] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.534 [2024-06-10 12:09:07.206154] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.534 [2024-06-10 12:09:07.208442] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.534 [2024-06-10 12:09:07.217170] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.534 [2024-06-10 12:09:07.217854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.534 [2024-06-10 12:09:07.218269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.534 [2024-06-10 12:09:07.218284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.534 [2024-06-10 12:09:07.218293] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.534 [2024-06-10 12:09:07.218481] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.534 [2024-06-10 12:09:07.218613] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.534 [2024-06-10 12:09:07.218621] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.534 [2024-06-10 12:09:07.218629] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.534 [2024-06-10 12:09:07.221138] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.534 [2024-06-10 12:09:07.229677] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.534 [2024-06-10 12:09:07.230174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.534 [2024-06-10 12:09:07.230679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.534 [2024-06-10 12:09:07.230694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.534 [2024-06-10 12:09:07.230704] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.534 [2024-06-10 12:09:07.230891] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.534 [2024-06-10 12:09:07.231007] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.534 [2024-06-10 12:09:07.231015] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.534 [2024-06-10 12:09:07.231023] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.534 [2024-06-10 12:09:07.233325] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.534 [2024-06-10 12:09:07.242356] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.534 [2024-06-10 12:09:07.242964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.534 [2024-06-10 12:09:07.243370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.534 [2024-06-10 12:09:07.243385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.534 [2024-06-10 12:09:07.243394] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.534 [2024-06-10 12:09:07.243603] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.534 [2024-06-10 12:09:07.243772] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.534 [2024-06-10 12:09:07.243780] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.534 [2024-06-10 12:09:07.243787] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.534 [2024-06-10 12:09:07.246174] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.534 [2024-06-10 12:09:07.255022] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.534 [2024-06-10 12:09:07.255644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.534 [2024-06-10 12:09:07.256042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.534 [2024-06-10 12:09:07.256052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.534 [2024-06-10 12:09:07.256060] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.534 [2024-06-10 12:09:07.256212] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.534 [2024-06-10 12:09:07.256351] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.534 [2024-06-10 12:09:07.256359] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.534 [2024-06-10 12:09:07.256366] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.534 [2024-06-10 12:09:07.258533] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.534 [2024-06-10 12:09:07.267610] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.534 [2024-06-10 12:09:07.268126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.534 [2024-06-10 12:09:07.268503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.534 [2024-06-10 12:09:07.268513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.534 [2024-06-10 12:09:07.268520] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.534 [2024-06-10 12:09:07.268689] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.534 [2024-06-10 12:09:07.268844] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.534 [2024-06-10 12:09:07.268852] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.534 [2024-06-10 12:09:07.268860] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.534 [2024-06-10 12:09:07.271436] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.534 [2024-06-10 12:09:07.280193] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.534 [2024-06-10 12:09:07.280701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.534 [2024-06-10 12:09:07.281075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.534 [2024-06-10 12:09:07.281084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.534 [2024-06-10 12:09:07.281091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.534 [2024-06-10 12:09:07.281201] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.534 [2024-06-10 12:09:07.281376] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.534 [2024-06-10 12:09:07.281384] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.534 [2024-06-10 12:09:07.281392] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.534 [2024-06-10 12:09:07.283513] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.534 [2024-06-10 12:09:07.293074] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.534 [2024-06-10 12:09:07.293700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.534 [2024-06-10 12:09:07.294091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.534 [2024-06-10 12:09:07.294104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.534 [2024-06-10 12:09:07.294114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.534 [2024-06-10 12:09:07.294290] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.534 [2024-06-10 12:09:07.294444] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.534 [2024-06-10 12:09:07.294452] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.534 [2024-06-10 12:09:07.294459] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.534 [2024-06-10 12:09:07.296811] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.798 [2024-06-10 12:09:07.305635] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.798 [2024-06-10 12:09:07.306144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-06-10 12:09:07.306503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-06-10 12:09:07.306514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.798 [2024-06-10 12:09:07.306521] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.798 [2024-06-10 12:09:07.306691] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.798 [2024-06-10 12:09:07.306867] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.798 [2024-06-10 12:09:07.306875] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.798 [2024-06-10 12:09:07.306882] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.798 [2024-06-10 12:09:07.309113] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.798 [2024-06-10 12:09:07.318131] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.798 [2024-06-10 12:09:07.318495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-06-10 12:09:07.318731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-06-10 12:09:07.318746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.798 [2024-06-10 12:09:07.318754] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.798 [2024-06-10 12:09:07.318883] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.798 [2024-06-10 12:09:07.319049] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.798 [2024-06-10 12:09:07.319056] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.798 [2024-06-10 12:09:07.319063] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.798 [2024-06-10 12:09:07.321571] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.798 [2024-06-10 12:09:07.330786] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.798 [2024-06-10 12:09:07.331344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-06-10 12:09:07.331741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-06-10 12:09:07.331753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.798 [2024-06-10 12:09:07.331772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.798 [2024-06-10 12:09:07.331898] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.798 [2024-06-10 12:09:07.332013] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.798 [2024-06-10 12:09:07.332021] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.798 [2024-06-10 12:09:07.332028] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.798 [2024-06-10 12:09:07.334425] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.798 [2024-06-10 12:09:07.343363] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.798 [2024-06-10 12:09:07.344004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-06-10 12:09:07.344239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-06-10 12:09:07.344260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.798 [2024-06-10 12:09:07.344269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.798 [2024-06-10 12:09:07.344457] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.798 [2024-06-10 12:09:07.344611] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.798 [2024-06-10 12:09:07.344619] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.798 [2024-06-10 12:09:07.344626] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.798 [2024-06-10 12:09:07.347055] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.798 [2024-06-10 12:09:07.355978] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.798 [2024-06-10 12:09:07.356641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-06-10 12:09:07.356897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-06-10 12:09:07.356910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.798 [2024-06-10 12:09:07.356920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.798 [2024-06-10 12:09:07.357150] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.798 [2024-06-10 12:09:07.357352] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.798 [2024-06-10 12:09:07.357361] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.798 [2024-06-10 12:09:07.357368] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.798 [2024-06-10 12:09:07.359902] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.798 [2024-06-10 12:09:07.368666] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.798 [2024-06-10 12:09:07.369191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-06-10 12:09:07.369571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-06-10 12:09:07.369583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.798 [2024-06-10 12:09:07.369590] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.798 [2024-06-10 12:09:07.369745] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.798 [2024-06-10 12:09:07.369834] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.798 [2024-06-10 12:09:07.369842] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.798 [2024-06-10 12:09:07.369849] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.798 [2024-06-10 12:09:07.372013] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.798 [2024-06-10 12:09:07.381312] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.798 [2024-06-10 12:09:07.381958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-06-10 12:09:07.382354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-06-10 12:09:07.382369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.798 [2024-06-10 12:09:07.382378] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.798 [2024-06-10 12:09:07.382503] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.798 [2024-06-10 12:09:07.382617] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.798 [2024-06-10 12:09:07.382625] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.798 [2024-06-10 12:09:07.382632] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.798 [2024-06-10 12:09:07.385054] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.798 [2024-06-10 12:09:07.393949] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.798 [2024-06-10 12:09:07.394579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-06-10 12:09:07.394808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-06-10 12:09:07.394821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.798 [2024-06-10 12:09:07.394830] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.798 [2024-06-10 12:09:07.394959] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.798 [2024-06-10 12:09:07.395109] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.798 [2024-06-10 12:09:07.395117] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.799 [2024-06-10 12:09:07.395124] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.799 [2024-06-10 12:09:07.397315] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.799 [2024-06-10 12:09:07.406692] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.799 [2024-06-10 12:09:07.406926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-06-10 12:09:07.407269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-06-10 12:09:07.407281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.799 [2024-06-10 12:09:07.407289] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.799 [2024-06-10 12:09:07.407433] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.799 [2024-06-10 12:09:07.407569] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.799 [2024-06-10 12:09:07.407577] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.799 [2024-06-10 12:09:07.407584] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.799 [2024-06-10 12:09:07.409977] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.799 [2024-06-10 12:09:07.419265] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.799 [2024-06-10 12:09:07.419734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-06-10 12:09:07.420001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-06-10 12:09:07.420010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.799 [2024-06-10 12:09:07.420018] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.799 [2024-06-10 12:09:07.420186] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.799 [2024-06-10 12:09:07.420303] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.799 [2024-06-10 12:09:07.420312] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.799 [2024-06-10 12:09:07.420318] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.799 [2024-06-10 12:09:07.422661] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.799 [2024-06-10 12:09:07.431909] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.799 [2024-06-10 12:09:07.432570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-06-10 12:09:07.432957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-06-10 12:09:07.432970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.799 [2024-06-10 12:09:07.432979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.799 [2024-06-10 12:09:07.433107] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.799 [2024-06-10 12:09:07.433253] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.799 [2024-06-10 12:09:07.433261] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.799 [2024-06-10 12:09:07.433269] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.799 [2024-06-10 12:09:07.435595] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.799 [2024-06-10 12:09:07.444575] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.799 [2024-06-10 12:09:07.444970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-06-10 12:09:07.445325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-06-10 12:09:07.445335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.799 [2024-06-10 12:09:07.445343] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.799 [2024-06-10 12:09:07.445493] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.799 [2024-06-10 12:09:07.445621] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.799 [2024-06-10 12:09:07.445634] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.799 [2024-06-10 12:09:07.445642] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.799 [2024-06-10 12:09:07.447922] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.799 [2024-06-10 12:09:07.457211] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.799 [2024-06-10 12:09:07.457761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-06-10 12:09:07.458117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-06-10 12:09:07.458127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.799 [2024-06-10 12:09:07.458134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.799 [2024-06-10 12:09:07.458266] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.799 [2024-06-10 12:09:07.458435] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.799 [2024-06-10 12:09:07.458442] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.799 [2024-06-10 12:09:07.458450] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.799 [2024-06-10 12:09:07.460784] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.799 [2024-06-10 12:09:07.469786] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.799 [2024-06-10 12:09:07.470304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-06-10 12:09:07.470720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-06-10 12:09:07.470729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.799 [2024-06-10 12:09:07.470737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.799 [2024-06-10 12:09:07.470849] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.799 [2024-06-10 12:09:07.470973] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.799 [2024-06-10 12:09:07.470981] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.799 [2024-06-10 12:09:07.470988] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.799 [2024-06-10 12:09:07.473322] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.799 [2024-06-10 12:09:07.482371] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.799 [2024-06-10 12:09:07.482913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-06-10 12:09:07.483280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-06-10 12:09:07.483300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.799 [2024-06-10 12:09:07.483308] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.799 [2024-06-10 12:09:07.483461] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.799 [2024-06-10 12:09:07.483625] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.799 [2024-06-10 12:09:07.483632] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.799 [2024-06-10 12:09:07.483643] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.799 [2024-06-10 12:09:07.486045] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.799 [2024-06-10 12:09:07.495021] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.799 [2024-06-10 12:09:07.495653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-06-10 12:09:07.496044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-06-10 12:09:07.496057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.799 [2024-06-10 12:09:07.496066] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.799 [2024-06-10 12:09:07.496241] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.799 [2024-06-10 12:09:07.496381] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.799 [2024-06-10 12:09:07.496389] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.799 [2024-06-10 12:09:07.496396] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.799 [2024-06-10 12:09:07.498809] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.799 [2024-06-10 12:09:07.507806] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.799 [2024-06-10 12:09:07.508195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-06-10 12:09:07.508412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-06-10 12:09:07.508422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.799 [2024-06-10 12:09:07.508430] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.799 [2024-06-10 12:09:07.508583] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.799 [2024-06-10 12:09:07.508712] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.799 [2024-06-10 12:09:07.508719] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.799 [2024-06-10 12:09:07.508726] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.799 [2024-06-10 12:09:07.511011] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.799 [2024-06-10 12:09:07.520379] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.799 [2024-06-10 12:09:07.520980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-06-10 12:09:07.521381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-06-10 12:09:07.521395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.800 [2024-06-10 12:09:07.521405] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.800 [2024-06-10 12:09:07.521579] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.800 [2024-06-10 12:09:07.521734] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.800 [2024-06-10 12:09:07.521742] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.800 [2024-06-10 12:09:07.521750] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.800 [2024-06-10 12:09:07.524143] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.800 [2024-06-10 12:09:07.533068] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.800 [2024-06-10 12:09:07.533627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-06-10 12:09:07.533984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-06-10 12:09:07.533994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.800 [2024-06-10 12:09:07.534002] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.800 [2024-06-10 12:09:07.534168] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.800 [2024-06-10 12:09:07.534300] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.800 [2024-06-10 12:09:07.534309] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.800 [2024-06-10 12:09:07.534316] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.800 [2024-06-10 12:09:07.536701] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.800 [2024-06-10 12:09:07.545738] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.800 [2024-06-10 12:09:07.546009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-06-10 12:09:07.546373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-06-10 12:09:07.546384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.800 [2024-06-10 12:09:07.546391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.800 [2024-06-10 12:09:07.546538] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.800 [2024-06-10 12:09:07.546709] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.800 [2024-06-10 12:09:07.546717] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.800 [2024-06-10 12:09:07.546724] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.800 [2024-06-10 12:09:07.549028] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.800 [2024-06-10 12:09:07.558507] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.800 [2024-06-10 12:09:07.559025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-06-10 12:09:07.559409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-06-10 12:09:07.559419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:13.800 [2024-06-10 12:09:07.559426] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:13.800 [2024-06-10 12:09:07.559595] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:13.800 [2024-06-10 12:09:07.559707] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.800 [2024-06-10 12:09:07.559715] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.800 [2024-06-10 12:09:07.559722] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.800 [2024-06-10 12:09:07.562040] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.064 [2024-06-10 12:09:07.571103] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.064 [2024-06-10 12:09:07.571468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.064 [2024-06-10 12:09:07.571713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.064 [2024-06-10 12:09:07.571722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:14.064 [2024-06-10 12:09:07.571730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:14.064 [2024-06-10 12:09:07.571895] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:14.064 [2024-06-10 12:09:07.572014] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.064 [2024-06-10 12:09:07.572022] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.064 [2024-06-10 12:09:07.572029] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.064 [2024-06-10 12:09:07.574455] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.064 [2024-06-10 12:09:07.583641] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.064 [2024-06-10 12:09:07.584186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.064 [2024-06-10 12:09:07.584508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.064 [2024-06-10 12:09:07.584518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:14.064 [2024-06-10 12:09:07.584526] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:14.064 [2024-06-10 12:09:07.584721] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:14.064 [2024-06-10 12:09:07.584792] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.064 [2024-06-10 12:09:07.584799] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.064 [2024-06-10 12:09:07.584805] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.064 [2024-06-10 12:09:07.586899] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.064 [2024-06-10 12:09:07.596087] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.064 [2024-06-10 12:09:07.596672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.064 [2024-06-10 12:09:07.596968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.064 [2024-06-10 12:09:07.596981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:14.064 [2024-06-10 12:09:07.596991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:14.064 [2024-06-10 12:09:07.597179] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:14.064 [2024-06-10 12:09:07.597362] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.064 [2024-06-10 12:09:07.597372] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.064 [2024-06-10 12:09:07.597380] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.064 [2024-06-10 12:09:07.599668] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.064 [2024-06-10 12:09:07.608449] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.064 [2024-06-10 12:09:07.609028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.064 [2024-06-10 12:09:07.609415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.064 [2024-06-10 12:09:07.609425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:14.064 [2024-06-10 12:09:07.609433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:14.064 [2024-06-10 12:09:07.609586] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:14.064 [2024-06-10 12:09:07.609695] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.064 [2024-06-10 12:09:07.609704] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.064 [2024-06-10 12:09:07.609711] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.064 [2024-06-10 12:09:07.611935] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.064 [2024-06-10 12:09:07.621059] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.064 [2024-06-10 12:09:07.621765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.064 [2024-06-10 12:09:07.622152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.064 [2024-06-10 12:09:07.622165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:14.064 [2024-06-10 12:09:07.622174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:14.064 [2024-06-10 12:09:07.622365] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:14.064 [2024-06-10 12:09:07.622497] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.064 [2024-06-10 12:09:07.622505] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.064 [2024-06-10 12:09:07.622512] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.064 [2024-06-10 12:09:07.624798] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.064 [2024-06-10 12:09:07.633878] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.064 [2024-06-10 12:09:07.634411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.064 [2024-06-10 12:09:07.634767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.064 [2024-06-10 12:09:07.634777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:14.064 [2024-06-10 12:09:07.634784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:14.064 [2024-06-10 12:09:07.634892] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:14.064 [2024-06-10 12:09:07.635087] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.064 [2024-06-10 12:09:07.635097] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.064 [2024-06-10 12:09:07.635104] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.064 [2024-06-10 12:09:07.637472] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.064 [2024-06-10 12:09:07.646474] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.064 [2024-06-10 12:09:07.647026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.064 [2024-06-10 12:09:07.647221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.064 [2024-06-10 12:09:07.647235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:14.064 [2024-06-10 12:09:07.647248] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:14.064 [2024-06-10 12:09:07.647432] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:14.064 [2024-06-10 12:09:07.647593] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.064 [2024-06-10 12:09:07.647601] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.064 [2024-06-10 12:09:07.647607] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.064 [2024-06-10 12:09:07.649833] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.064 [2024-06-10 12:09:07.659159] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.064 [2024-06-10 12:09:07.659544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.064 [2024-06-10 12:09:07.659746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.064 [2024-06-10 12:09:07.659755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:14.064 [2024-06-10 12:09:07.659763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:14.064 [2024-06-10 12:09:07.659912] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:14.064 [2024-06-10 12:09:07.660080] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.064 [2024-06-10 12:09:07.660087] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.064 [2024-06-10 12:09:07.660094] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.064 [2024-06-10 12:09:07.662483] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.064 [2024-06-10 12:09:07.671659] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.064 [2024-06-10 12:09:07.672177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.064 [2024-06-10 12:09:07.672358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.064 [2024-06-10 12:09:07.672368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:14.064 [2024-06-10 12:09:07.672376] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:14.065 [2024-06-10 12:09:07.672509] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:14.065 [2024-06-10 12:09:07.672656] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.065 [2024-06-10 12:09:07.672664] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.065 [2024-06-10 12:09:07.672671] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.065 [2024-06-10 12:09:07.674878] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.065 [2024-06-10 12:09:07.684348] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.065 [2024-06-10 12:09:07.684702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-10 12:09:07.685069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-10 12:09:07.685079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:14.065 [2024-06-10 12:09:07.685093] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:14.065 [2024-06-10 12:09:07.685200] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:14.065 [2024-06-10 12:09:07.685314] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.065 [2024-06-10 12:09:07.685322] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.065 [2024-06-10 12:09:07.685330] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.065 [2024-06-10 12:09:07.687605] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.065 [2024-06-10 12:09:07.696877] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.065 [2024-06-10 12:09:07.697477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-10 12:09:07.697733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-10 12:09:07.697754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:14.065 [2024-06-10 12:09:07.697764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:14.065 [2024-06-10 12:09:07.697952] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:14.065 [2024-06-10 12:09:07.698127] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.065 [2024-06-10 12:09:07.698135] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.065 [2024-06-10 12:09:07.698143] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.065 [2024-06-10 12:09:07.700678] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.065 [2024-06-10 12:09:07.709594] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.065 [2024-06-10 12:09:07.710234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-10 12:09:07.710624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-10 12:09:07.710637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:14.065 [2024-06-10 12:09:07.710647] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:14.065 [2024-06-10 12:09:07.710836] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:14.065 [2024-06-10 12:09:07.711027] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.065 [2024-06-10 12:09:07.711037] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.065 [2024-06-10 12:09:07.711044] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.065 [2024-06-10 12:09:07.713309] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.065 [2024-06-10 12:09:07.722372] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.065 [2024-06-10 12:09:07.722933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-10 12:09:07.723323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-10 12:09:07.723338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:14.065 [2024-06-10 12:09:07.723347] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:14.065 [2024-06-10 12:09:07.723518] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:14.065 [2024-06-10 12:09:07.723677] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.065 [2024-06-10 12:09:07.723686] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.065 [2024-06-10 12:09:07.723694] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.065 [2024-06-10 12:09:07.725920] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.065 [2024-06-10 12:09:07.734692] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.065 [2024-06-10 12:09:07.735195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-10 12:09:07.735602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-10 12:09:07.735613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:14.065 [2024-06-10 12:09:07.735621] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:14.065 [2024-06-10 12:09:07.735789] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:14.065 [2024-06-10 12:09:07.735966] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.065 [2024-06-10 12:09:07.735974] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.065 [2024-06-10 12:09:07.735981] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.065 [2024-06-10 12:09:07.738397] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.065 [2024-06-10 12:09:07.747387] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.065 [2024-06-10 12:09:07.747892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-10 12:09:07.748050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-10 12:09:07.748060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:14.065 [2024-06-10 12:09:07.748067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:14.065 [2024-06-10 12:09:07.748259] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:14.065 [2024-06-10 12:09:07.748446] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.065 [2024-06-10 12:09:07.748455] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.065 [2024-06-10 12:09:07.748462] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.065 [2024-06-10 12:09:07.750784] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
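The repeated cycles above are all the same event: connect() to 10.0.0.2:4420 returns errno 111 (ECONNREFUSED on Linux) because nothing is listening on that port yet, so each controller reset ends with spdk_nvme_ctrlr_reconnect_poll_async reporting a failed reinitialization and bdevperf immediately schedules the next reset. The retries only stop once the new target instance, whose bring-up is traced below, adds a listener on port 4420. A minimal, hypothetical bash helper that waits for the same condition (wait_for_listener is not part of the test suite; it only illustrates the retry-until-listening behaviour the log shows) could look like this:

wait_for_listener() {
    # probe addr:port with bash's /dev/tcp; a refused connect() (errno 111) keeps the loop going
    local addr=$1 port=$2 tries=${3:-50}
    local i
    for ((i = 0; i < tries; i++)); do
        if (exec 3<>"/dev/tcp/${addr}/${port}") 2>/dev/null; then
            return 0    # something is now accepting connections on addr:port
        fi
        sleep 0.1
    done
    return 1
}
# usage: wait_for_listener 10.0.0.2 4420 || echo "listener never came up"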
00:31:14.065 12:09:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:14.065 12:09:07 -- common/autotest_common.sh@852 -- # return 0 00:31:14.065 12:09:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:14.065 12:09:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:14.065 12:09:07 -- common/autotest_common.sh@10 -- # set +x 00:31:14.065 [2024-06-10 12:09:07.760150] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.065 [2024-06-10 12:09:07.760779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-10 12:09:07.761041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-10 12:09:07.761052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:14.065 [2024-06-10 12:09:07.761064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:14.065 [2024-06-10 12:09:07.761274] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:14.065 [2024-06-10 12:09:07.761443] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.065 [2024-06-10 12:09:07.761450] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.065 [2024-06-10 12:09:07.761457] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.065 [2024-06-10 12:09:07.763690] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.065 [2024-06-10 12:09:07.772707] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.065 [2024-06-10 12:09:07.773211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-10 12:09:07.773355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-10 12:09:07.773365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:14.065 [2024-06-10 12:09:07.773372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:14.065 [2024-06-10 12:09:07.773521] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:14.065 [2024-06-10 12:09:07.773609] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.065 [2024-06-10 12:09:07.773617] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.065 [2024-06-10 12:09:07.773623] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.065 [2024-06-10 12:09:07.776018] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.065 [2024-06-10 12:09:07.785429] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.065 [2024-06-10 12:09:07.785927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-10 12:09:07.786133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-10 12:09:07.786143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:14.065 [2024-06-10 12:09:07.786150] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:14.065 [2024-06-10 12:09:07.786320] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:14.066 [2024-06-10 12:09:07.786433] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.066 [2024-06-10 12:09:07.786440] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.066 [2024-06-10 12:09:07.786447] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.066 [2024-06-10 12:09:07.788890] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.066 12:09:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:14.066 12:09:07 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:14.066 [2024-06-10 12:09:07.797942] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.066 12:09:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.066 12:09:07 -- common/autotest_common.sh@10 -- # set +x 00:31:14.066 [2024-06-10 12:09:07.798580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-10 12:09:07.798719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-10 12:09:07.798742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:14.066 [2024-06-10 12:09:07.798752] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:14.066 [2024-06-10 12:09:07.798915] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:14.066 [2024-06-10 12:09:07.799010] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.066 [2024-06-10 12:09:07.799018] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.066 [2024-06-10 12:09:07.799025] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.066 [2024-06-10 12:09:07.801111] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
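Two things happen in the trace above once the target app reports ready: a cleanup handler is armed (nvmf/common.sh@472) so the target is torn down even if the test is interrupted, and the first configuration RPC (nvmf_create_transport, host/bdevperf.sh@17) is issued. The trap idiom, sketched here as a function using the helper names visible in the trace (the bodies of process_shm and nvmftestfini live elsewhere in the suite's common scripts and are only assumed, not shown):

# sketch of the cleanup-on-exit idiom armed by nvmf/common.sh@472 above
cleanup() {
    process_shm --id "$NVMF_APP_SHM_ID" || :   # post-process the target's shared-memory segment; never fail the trap
    nvmftestfini                               # stop the nvmf target and undo the test network setup
}
trap cleanup SIGINT SIGTERM EXIT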
00:31:14.066 [2024-06-10 12:09:07.803028] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.066 12:09:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.066 12:09:07 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:14.066 12:09:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.066 12:09:07 -- common/autotest_common.sh@10 -- # set +x 00:31:14.066 [2024-06-10 12:09:07.810424] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.066 [2024-06-10 12:09:07.810933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-10 12:09:07.811291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-10 12:09:07.811302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:14.066 [2024-06-10 12:09:07.811310] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:14.066 [2024-06-10 12:09:07.811401] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:14.066 [2024-06-10 12:09:07.811550] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.066 [2024-06-10 12:09:07.811558] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.066 [2024-06-10 12:09:07.811565] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.066 [2024-06-10 12:09:07.813968] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.066 [2024-06-10 12:09:07.823018] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.066 [2024-06-10 12:09:07.823697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-10 12:09:07.824086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-10 12:09:07.824099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:14.066 [2024-06-10 12:09:07.824108] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:14.066 [2024-06-10 12:09:07.824296] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:14.066 [2024-06-10 12:09:07.824469] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.066 [2024-06-10 12:09:07.824477] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.066 [2024-06-10 12:09:07.824484] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.066 [2024-06-10 12:09:07.826674] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.327 Malloc0 00:31:14.327 12:09:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.327 12:09:07 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:14.327 [2024-06-10 12:09:07.835510] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.327 12:09:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.327 12:09:07 -- common/autotest_common.sh@10 -- # set +x 00:31:14.327 [2024-06-10 12:09:07.836075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.327 [2024-06-10 12:09:07.836460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.327 [2024-06-10 12:09:07.836471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:14.327 [2024-06-10 12:09:07.836478] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:14.327 [2024-06-10 12:09:07.836623] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:14.327 [2024-06-10 12:09:07.836795] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.327 [2024-06-10 12:09:07.836802] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.327 [2024-06-10 12:09:07.836809] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.327 [2024-06-10 12:09:07.839220] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.327 12:09:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.327 12:09:07 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:14.327 12:09:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.327 12:09:07 -- common/autotest_common.sh@10 -- # set +x 00:31:14.327 [2024-06-10 12:09:07.848154] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.327 [2024-06-10 12:09:07.848595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.327 [2024-06-10 12:09:07.848971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.327 [2024-06-10 12:09:07.848984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:14.327 [2024-06-10 12:09:07.848994] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:14.327 [2024-06-10 12:09:07.849119] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:14.327 [2024-06-10 12:09:07.849299] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.327 [2024-06-10 12:09:07.849308] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.327 [2024-06-10 12:09:07.849316] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:14.327 [2024-06-10 12:09:07.851699] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.327 12:09:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.327 12:09:07 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:14.327 12:09:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.327 12:09:07 -- common/autotest_common.sh@10 -- # set +x 00:31:14.327 [2024-06-10 12:09:07.860658] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.327 [2024-06-10 12:09:07.861352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.327 [2024-06-10 12:09:07.861555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.327 [2024-06-10 12:09:07.861567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05450 with addr=10.0.0.2, port=4420 00:31:14.327 [2024-06-10 12:09:07.861577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05450 is same with the state(5) to be set 00:31:14.327 [2024-06-10 12:09:07.861764] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05450 (9): Bad file descriptor 00:31:14.327 [2024-06-10 12:09:07.861946] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.327 [2024-06-10 12:09:07.861956] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.327 [2024-06-10 12:09:07.861963] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.327 [2024-06-10 12:09:07.862032] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:14.327 [2024-06-10 12:09:07.864619] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.327 12:09:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.327 12:09:07 -- host/bdevperf.sh@38 -- # wait 2148782 00:31:14.327 [2024-06-10 12:09:07.873371] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.327 [2024-06-10 12:09:07.911917] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
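At this point the whole target stack for the test exists: the rpc_cmd calls traced above (host/bdevperf.sh lines 17 through 21) created the TCP transport, a RAM-backed Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as a namespace, and finally the 10.0.0.2:4420 listener, after which the pending controller reset succeeds. Assuming rpc_cmd simply forwards its arguments to scripts/rpc.py on the default RPC socket (that wrapper is defined in the suite's common helpers, not shown here), the equivalent stand-alone sequence is roughly:

# sketch: same arguments as the traced rpc_cmd calls, issued directly through scripts/rpc.py
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                                        # 64 MB malloc bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, fixed serial
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose Malloc0 as a namespace
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # the listener the resets above were waiting for

With the listener up, the verify workload runs to completion. As a rough cross-check of the summary that follows, 13972.45 IOPS at 4096 bytes per IO is 13972.45 * 4096 / 1048576, or about 54.6 MiB/s, which matches the reported MiB/s column.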
00:31:24.332 00:31:24.332 Latency(us) 00:31:24.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:24.332 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:24.332 Verification LBA range: start 0x0 length 0x4000 00:31:24.332 Nvme1n1 : 15.00 13972.45 54.58 14077.69 0.00 4547.93 696.32 21408.43 00:31:24.332 =================================================================================================================== 00:31:24.332 Total : 13972.45 54.58 14077.69 0.00 4547.93 696.32 21408.43 00:31:24.332 12:09:16 -- host/bdevperf.sh@39 -- # sync 00:31:24.332 12:09:16 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:24.332 12:09:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.332 12:09:16 -- common/autotest_common.sh@10 -- # set +x 00:31:24.332 12:09:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.332 12:09:16 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:31:24.332 12:09:16 -- host/bdevperf.sh@44 -- # nvmftestfini 00:31:24.332 12:09:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:24.332 12:09:16 -- nvmf/common.sh@116 -- # sync 00:31:24.332 12:09:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:24.332 12:09:16 -- nvmf/common.sh@119 -- # set +e 00:31:24.332 12:09:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:24.332 12:09:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:24.332 rmmod nvme_tcp 00:31:24.332 rmmod nvme_fabrics 00:31:24.332 rmmod nvme_keyring 00:31:24.332 12:09:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:24.332 12:09:16 -- nvmf/common.sh@123 -- # set -e 00:31:24.332 12:09:16 -- nvmf/common.sh@124 -- # return 0 00:31:24.332 12:09:16 -- nvmf/common.sh@477 -- # '[' -n 2149875 ']' 00:31:24.332 12:09:16 -- nvmf/common.sh@478 -- # killprocess 2149875 00:31:24.332 12:09:16 -- common/autotest_common.sh@926 -- # '[' -z 2149875 ']' 00:31:24.332 12:09:16 -- common/autotest_common.sh@930 -- # kill -0 2149875 00:31:24.332 12:09:16 -- common/autotest_common.sh@931 -- # uname 00:31:24.332 12:09:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:24.332 12:09:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2149875 00:31:24.332 12:09:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:24.332 12:09:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:24.332 12:09:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2149875' 00:31:24.332 killing process with pid 2149875 00:31:24.332 12:09:16 -- common/autotest_common.sh@945 -- # kill 2149875 00:31:24.332 12:09:16 -- common/autotest_common.sh@950 -- # wait 2149875 00:31:24.332 12:09:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:24.332 12:09:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:24.332 12:09:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:24.332 12:09:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:24.332 12:09:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:24.332 12:09:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.332 12:09:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:24.332 12:09:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.274 12:09:18 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:25.274 00:31:25.274 real 0m27.422s 00:31:25.274 user 1m2.588s 00:31:25.274 sys 0m6.885s 00:31:25.274 12:09:18 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:31:25.274 12:09:18 -- common/autotest_common.sh@10 -- # set +x 00:31:25.274 ************************************ 00:31:25.274 END TEST nvmf_bdevperf 00:31:25.274 ************************************ 00:31:25.274 12:09:18 -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:25.274 12:09:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:25.274 12:09:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:25.274 12:09:18 -- common/autotest_common.sh@10 -- # set +x 00:31:25.274 ************************************ 00:31:25.274 START TEST nvmf_target_disconnect 00:31:25.274 ************************************ 00:31:25.274 12:09:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:25.274 * Looking for test storage... 00:31:25.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:25.274 12:09:18 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:25.274 12:09:18 -- nvmf/common.sh@7 -- # uname -s 00:31:25.274 12:09:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:25.274 12:09:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:25.274 12:09:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:25.274 12:09:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:25.274 12:09:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:25.274 12:09:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:25.274 12:09:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:25.274 12:09:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:25.274 12:09:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:25.274 12:09:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:25.274 12:09:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:25.274 12:09:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:25.274 12:09:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:25.274 12:09:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:25.274 12:09:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:25.274 12:09:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:25.274 12:09:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:25.274 12:09:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:25.274 12:09:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:25.274 12:09:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.274 12:09:18 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.275 12:09:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.275 12:09:18 -- paths/export.sh@5 -- # export PATH 00:31:25.275 12:09:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.275 12:09:18 -- nvmf/common.sh@46 -- # : 0 00:31:25.275 12:09:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:25.275 12:09:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:25.275 12:09:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:25.275 12:09:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:25.275 12:09:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:25.275 12:09:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:25.275 12:09:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:25.275 12:09:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:25.275 12:09:18 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:25.275 12:09:18 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:31:25.275 12:09:18 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:31:25.275 12:09:18 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:31:25.275 12:09:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:25.275 12:09:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:25.275 12:09:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:25.275 12:09:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:25.275 12:09:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:25.275 12:09:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.275 12:09:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:25.275 12:09:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.275 12:09:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:25.275 12:09:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:25.275 12:09:18 -- nvmf/common.sh@284 -- # 
xtrace_disable 00:31:25.275 12:09:18 -- common/autotest_common.sh@10 -- # set +x 00:31:33.488 12:09:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:33.488 12:09:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:33.488 12:09:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:33.488 12:09:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:33.488 12:09:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:33.488 12:09:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:33.488 12:09:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:33.488 12:09:25 -- nvmf/common.sh@294 -- # net_devs=() 00:31:33.488 12:09:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:33.488 12:09:25 -- nvmf/common.sh@295 -- # e810=() 00:31:33.488 12:09:25 -- nvmf/common.sh@295 -- # local -ga e810 00:31:33.488 12:09:25 -- nvmf/common.sh@296 -- # x722=() 00:31:33.488 12:09:25 -- nvmf/common.sh@296 -- # local -ga x722 00:31:33.488 12:09:25 -- nvmf/common.sh@297 -- # mlx=() 00:31:33.488 12:09:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:33.488 12:09:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:33.488 12:09:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:33.488 12:09:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:33.488 12:09:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:33.488 12:09:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:33.488 12:09:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:33.488 12:09:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:33.488 12:09:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:33.488 12:09:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:33.488 12:09:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:33.488 12:09:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:33.488 12:09:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:33.488 12:09:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:33.488 12:09:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:33.488 12:09:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:33.488 12:09:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:33.488 12:09:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:33.488 12:09:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:33.488 12:09:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:33.488 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:33.488 12:09:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:33.488 12:09:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:33.488 12:09:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:33.488 12:09:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:33.488 12:09:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:33.488 12:09:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:33.488 12:09:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:33.488 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:33.488 12:09:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:33.488 12:09:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:33.488 12:09:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:33.488 12:09:25 -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:33.488 12:09:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:33.488 12:09:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:33.488 12:09:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:33.488 12:09:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:33.488 12:09:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:33.488 12:09:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:33.488 12:09:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:33.488 12:09:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:33.488 12:09:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:33.488 Found net devices under 0000:31:00.0: cvl_0_0 00:31:33.488 12:09:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:33.488 12:09:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:33.488 12:09:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:33.488 12:09:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:33.488 12:09:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:33.488 12:09:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:33.488 Found net devices under 0000:31:00.1: cvl_0_1 00:31:33.488 12:09:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:33.488 12:09:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:33.488 12:09:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:33.488 12:09:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:33.488 12:09:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:33.488 12:09:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:33.488 12:09:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:33.488 12:09:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:33.488 12:09:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:33.488 12:09:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:33.488 12:09:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:33.488 12:09:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:33.488 12:09:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:33.488 12:09:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:33.488 12:09:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:33.488 12:09:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:33.488 12:09:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:33.488 12:09:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:33.488 12:09:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:33.489 12:09:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:33.489 12:09:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:33.489 12:09:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:33.489 12:09:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:33.489 12:09:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:33.489 12:09:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:33.489 12:09:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:33.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:33.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.513 ms 00:31:33.489 00:31:33.489 --- 10.0.0.2 ping statistics --- 00:31:33.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:33.489 rtt min/avg/max/mdev = 0.513/0.513/0.513/0.000 ms 00:31:33.489 12:09:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:33.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:33.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.352 ms 00:31:33.489 00:31:33.489 --- 10.0.0.1 ping statistics --- 00:31:33.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:33.489 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:31:33.489 12:09:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:33.489 12:09:26 -- nvmf/common.sh@410 -- # return 0 00:31:33.489 12:09:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:33.489 12:09:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:33.489 12:09:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:33.489 12:09:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:33.489 12:09:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:33.489 12:09:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:33.489 12:09:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:33.489 12:09:26 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:31:33.489 12:09:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:33.489 12:09:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:33.489 12:09:26 -- common/autotest_common.sh@10 -- # set +x 00:31:33.489 ************************************ 00:31:33.489 START TEST nvmf_target_disconnect_tc1 00:31:33.489 ************************************ 00:31:33.489 12:09:26 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:31:33.489 12:09:26 -- host/target_disconnect.sh@32 -- # set +e 00:31:33.489 12:09:26 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:33.489 EAL: No free 2048 kB hugepages reported on node 1 00:31:33.489 [2024-06-10 12:09:26.321862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.489 [2024-06-10 12:09:26.322279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.489 [2024-06-10 12:09:26.322294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x584310 with addr=10.0.0.2, port=4420 00:31:33.489 [2024-06-10 12:09:26.322316] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:33.489 [2024-06-10 12:09:26.322327] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:33.489 [2024-06-10 12:09:26.322335] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:31:33.489 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:31:33.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:31:33.489 Initializing NVMe Controllers 00:31:33.489 12:09:26 -- host/target_disconnect.sh@33 -- # trap - ERR 00:31:33.489 12:09:26 -- host/target_disconnect.sh@33 -- # print_backtrace 00:31:33.489 12:09:26 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:31:33.489 12:09:26 -- common/autotest_common.sh@1132 -- # return 0 00:31:33.489 
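The trace above covers two setup steps: the PCI scan matched both E810 ports by vendor:device ID 0x8086:0x159b (outside the harness the equivalent check is simply lspci -d 8086:159b), and nvmf_tcp_init then wired those two ports into a point-to-point TCP topology by moving one of them into a private network namespace. The lines below are a minimal standalone sketch of that second step, reconstructed from the commands visible in the trace; the interface names (cvl_0_0, cvl_0_1) and addresses are specific to this run and will differ on other hosts.

    # Target port lives in its own namespace, so the initiator must cross a real TCP path.
    ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator-side address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target-side address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                  # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability
    modprobe nvme-tcp                                                   # kernel NVMe/TCP initiator module

Both ping checks in the trace succeed with a single packet and sub-millisecond RTT, so the topology is confirmed up before any NVMe-oF traffic is attempted.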
12:09:26 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:31:33.489 12:09:26 -- host/target_disconnect.sh@41 -- # set -e 00:31:33.489 00:31:33.489 real 0m0.105s 00:31:33.489 user 0m0.039s 00:31:33.489 sys 0m0.064s 00:31:33.489 12:09:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:33.489 12:09:26 -- common/autotest_common.sh@10 -- # set +x 00:31:33.489 ************************************ 00:31:33.489 END TEST nvmf_target_disconnect_tc1 00:31:33.489 ************************************ 00:31:33.489 12:09:26 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:31:33.489 12:09:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:33.489 12:09:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:33.489 12:09:26 -- common/autotest_common.sh@10 -- # set +x 00:31:33.489 ************************************ 00:31:33.489 START TEST nvmf_target_disconnect_tc2 00:31:33.489 ************************************ 00:31:33.489 12:09:26 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:31:33.489 12:09:26 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:31:33.489 12:09:26 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:33.489 12:09:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:33.489 12:09:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:33.489 12:09:26 -- common/autotest_common.sh@10 -- # set +x 00:31:33.489 12:09:26 -- nvmf/common.sh@469 -- # nvmfpid=2156453 00:31:33.489 12:09:26 -- nvmf/common.sh@470 -- # waitforlisten 2156453 00:31:33.489 12:09:26 -- common/autotest_common.sh@819 -- # '[' -z 2156453 ']' 00:31:33.489 12:09:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:33.489 12:09:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:33.489 12:09:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:33.489 12:09:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:33.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:33.489 12:09:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:33.489 12:09:26 -- common/autotest_common.sh@10 -- # set +x 00:31:33.489 [2024-06-10 12:09:26.440782] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:33.489 [2024-06-10 12:09:26.440840] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:33.489 EAL: No free 2048 kB hugepages reported on node 1 00:31:33.489 [2024-06-10 12:09:26.528580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:33.489 [2024-06-10 12:09:26.620145] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:33.489 [2024-06-10 12:09:26.620311] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:33.489 [2024-06-10 12:09:26.620321] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:33.489 [2024-06-10 12:09:26.620329] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
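For the tc2 case, disconnect_init starts a fresh nvmf_tgt inside the target namespace and waits for its RPC socket before provisioning anything. Below is a hedged sketch of that step, with the long Jenkins workspace path from the trace abbreviated to $SPDK_DIR and the readiness wait simplified to a generic rpc.py probe (the harness uses its own waitforlisten helper for the same purpose).

    # Run the target on cores 4-7 (-m 0xF0) inside the namespace, with all trace groups enabled (-e 0xFFFF).
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # Poll the default RPC socket until the application answers.
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

The trace_register_description "name (RDMA_REQ_RDY_TO_COMPL_PEND) too long" error above and the reactor start-up notices that follow are emitted during this launch; the test proceeds past both.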
00:31:33.489 [2024-06-10 12:09:26.620809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:31:33.489 [2024-06-10 12:09:26.621048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:31:33.489 [2024-06-10 12:09:26.621096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:31:33.489 [2024-06-10 12:09:26.621098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:31:33.489 12:09:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:33.489 12:09:27 -- common/autotest_common.sh@852 -- # return 0 00:31:33.489 12:09:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:33.489 12:09:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:33.489 12:09:27 -- common/autotest_common.sh@10 -- # set +x 00:31:33.751 12:09:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:33.751 12:09:27 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:33.751 12:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.751 12:09:27 -- common/autotest_common.sh@10 -- # set +x 00:31:33.751 Malloc0 00:31:33.751 12:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.751 12:09:27 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:33.751 12:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.751 12:09:27 -- common/autotest_common.sh@10 -- # set +x 00:31:33.751 [2024-06-10 12:09:27.305738] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:33.751 12:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.751 12:09:27 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:33.751 12:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.751 12:09:27 -- common/autotest_common.sh@10 -- # set +x 00:31:33.751 12:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.751 12:09:27 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:33.751 12:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.751 12:09:27 -- common/autotest_common.sh@10 -- # set +x 00:31:33.751 12:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.751 12:09:27 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:33.751 12:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.751 12:09:27 -- common/autotest_common.sh@10 -- # set +x 00:31:33.751 [2024-06-10 12:09:27.346109] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:33.751 12:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.751 12:09:27 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:33.751 12:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.751 12:09:27 -- common/autotest_common.sh@10 -- # set +x 00:31:33.751 12:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.751 12:09:27 -- host/target_disconnect.sh@50 -- # reconnectpid=2156759 00:31:33.751 12:09:27 -- host/target_disconnect.sh@52 -- # sleep 2 00:31:33.751 12:09:27 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:33.751 EAL: No free 2048 kB hugepages reported on node 1 00:31:35.669 12:09:29 -- host/target_disconnect.sh@53 -- # kill -9 2156453 00:31:35.669 12:09:29 -- host/target_disconnect.sh@55 -- # sleep 2 00:31:35.669 Read completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Read completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Read completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Read completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Read completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Read completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Read completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Read completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Read completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Read completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Read completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Read completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Read completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Read completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Write completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Read completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Write completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Read completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Write completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Read completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Read completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Write completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Write completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Read completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Read completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Write completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Read completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Read completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Write completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Write completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Write completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 Write completed with error (sct=0, sc=8) 00:31:35.669 starting I/O failed 00:31:35.669 [2024-06-10 12:09:29.377935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.669 [2024-06-10 12:09:29.378516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.669 [2024-06-10 12:09:29.378914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.378925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with 
addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 00:31:35.670 [2024-06-10 12:09:29.379476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.379792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.379801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 00:31:35.670 [2024-06-10 12:09:29.380085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.380558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.380585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 00:31:35.670 [2024-06-10 12:09:29.380904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.381128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.381135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 00:31:35.670 [2024-06-10 12:09:29.381566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.381970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.381977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 00:31:35.670 [2024-06-10 12:09:29.382474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.382781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.382791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 00:31:35.670 [2024-06-10 12:09:29.383035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.383285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.383292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 00:31:35.670 [2024-06-10 12:09:29.383699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.384072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.384079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 
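Once the target is up, the tc2 body in the trace provisions it over RPC: a 64 MB Malloc0 bdev, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and listeners on 10.0.0.2:4420 for the subsystem and for discovery. The harness issues these through its rpc_cmd wrapper; the sketch below shows the same sequence as plain rpc.py calls against the target's socket (paths abbreviated as above, flags copied from the trace).

    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC bdev_malloc_create 64 512 -b Malloc0                     # 64 MB backing bdev, 512-byte blocks
    $RPC nvmf_create_transport -t tcp -o                          # TCP transport (flags as issued in the trace)
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

With the listener up, the reconnect example is started against 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420', and the first target process is killed two seconds later, which is what produces the burst of refused connections logged around this point.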
00:31:35.670 [2024-06-10 12:09:29.384461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.384708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.384715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 00:31:35.670 [2024-06-10 12:09:29.385096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.385341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.385348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 00:31:35.670 [2024-06-10 12:09:29.385552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.385795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.385802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 00:31:35.670 [2024-06-10 12:09:29.386187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.386512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.386520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 00:31:35.670 [2024-06-10 12:09:29.386819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.387207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.387215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 00:31:35.670 [2024-06-10 12:09:29.387544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.387892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.387899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 00:31:35.670 [2024-06-10 12:09:29.388290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.388663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.388670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 
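The repeating connect() failed, errno = 111 / qpair failed entries here are the expected shape of this test: nvmf_tgt (pid 2156453) was just killed with SIGKILL, so every reconnect attempt from the reconnect example is refused at the TCP level. errno 111 is ECONNREFUSED on Linux; a quick way to confirm the mapping on a test host with kernel headers installed (a generic check, not part of the harness):

    grep -w ECONNREFUSED /usr/include/asm-generic/errno.h    # -> #define ECONNREFUSED 111 /* Connection refused */

Each retry logs the same triple: the socket-level failure in posix.c, the qpair-level failure in nvme_tcp.c for tqpair 0x7f1be4000b90 (a per-process heap address, not an identifier worth comparing across runs), and the summary line from the example itself.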
00:31:35.670 [2024-06-10 12:09:29.389002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.389310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.389318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 00:31:35.670 [2024-06-10 12:09:29.389522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.389856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.389863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 00:31:35.670 [2024-06-10 12:09:29.390208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.390574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.390582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 00:31:35.670 [2024-06-10 12:09:29.390967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.391246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.391254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 00:31:35.670 [2024-06-10 12:09:29.391620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.392008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.392015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 00:31:35.670 [2024-06-10 12:09:29.392416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.392764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.392771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 00:31:35.670 [2024-06-10 12:09:29.393151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.393479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.393486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 
00:31:35.670 [2024-06-10 12:09:29.393734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.394084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.394091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 00:31:35.670 [2024-06-10 12:09:29.394525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.394874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.394881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 00:31:35.670 [2024-06-10 12:09:29.395074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.395436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.395444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 00:31:35.670 [2024-06-10 12:09:29.395750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.396115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.396122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 00:31:35.670 [2024-06-10 12:09:29.396401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.396760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.396767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 00:31:35.670 [2024-06-10 12:09:29.397154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.397499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.397506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 00:31:35.670 [2024-06-10 12:09:29.397761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.398146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.670 [2024-06-10 12:09:29.398153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.670 qpair failed and we were unable to recover it. 
00:31:35.671 [2024-06-10 12:09:29.398471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.398828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.398834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.671 qpair failed and we were unable to recover it. 00:31:35.671 [2024-06-10 12:09:29.399173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.399473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.399488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.671 qpair failed and we were unable to recover it. 00:31:35.671 [2024-06-10 12:09:29.399829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.400180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.400186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.671 qpair failed and we were unable to recover it. 00:31:35.671 [2024-06-10 12:09:29.400508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.400865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.400872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.671 qpair failed and we were unable to recover it. 00:31:35.671 [2024-06-10 12:09:29.401211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.401573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.401579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.671 qpair failed and we were unable to recover it. 00:31:35.671 [2024-06-10 12:09:29.401944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.402267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.402273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.671 qpair failed and we were unable to recover it. 00:31:35.671 [2024-06-10 12:09:29.402590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.402966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.402973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.671 qpair failed and we were unable to recover it. 
00:31:35.671 [2024-06-10 12:09:29.403223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.403556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.403563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.671 qpair failed and we were unable to recover it. 00:31:35.671 [2024-06-10 12:09:29.403886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.404238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.404256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.671 qpair failed and we were unable to recover it. 00:31:35.671 [2024-06-10 12:09:29.404559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.404907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.404914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.671 qpair failed and we were unable to recover it. 00:31:35.671 [2024-06-10 12:09:29.405230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.405604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.405611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.671 qpair failed and we were unable to recover it. 00:31:35.671 [2024-06-10 12:09:29.405838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.406208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.406214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.671 qpair failed and we were unable to recover it. 00:31:35.671 [2024-06-10 12:09:29.406515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.406852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.406859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.671 qpair failed and we were unable to recover it. 00:31:35.671 [2024-06-10 12:09:29.407112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.407464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.407472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.671 qpair failed and we were unable to recover it. 
00:31:35.671 [2024-06-10 12:09:29.407818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.408147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.408154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.671 qpair failed and we were unable to recover it. 00:31:35.671 [2024-06-10 12:09:29.408500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.408687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.408693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.671 qpair failed and we were unable to recover it. 00:31:35.671 [2024-06-10 12:09:29.409056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.409401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.409408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.671 qpair failed and we were unable to recover it. 00:31:35.671 [2024-06-10 12:09:29.409772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.410118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.410124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.671 qpair failed and we were unable to recover it. 00:31:35.671 [2024-06-10 12:09:29.410554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.410890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.410896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.671 qpair failed and we were unable to recover it. 00:31:35.671 [2024-06-10 12:09:29.411278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.411625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.411631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.671 qpair failed and we were unable to recover it. 00:31:35.671 [2024-06-10 12:09:29.411968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.412318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.412325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.671 qpair failed and we were unable to recover it. 
00:31:35.671 [2024-06-10 12:09:29.412712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.413057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.413063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.671 qpair failed and we were unable to recover it. 00:31:35.671 [2024-06-10 12:09:29.413487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.413707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.413714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.671 qpair failed and we were unable to recover it. 00:31:35.671 [2024-06-10 12:09:29.413797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.414139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.414151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.671 qpair failed and we were unable to recover it. 00:31:35.671 [2024-06-10 12:09:29.414509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.414890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.414896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.671 qpair failed and we were unable to recover it. 00:31:35.671 [2024-06-10 12:09:29.415283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.415647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.415653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.671 qpair failed and we were unable to recover it. 00:31:35.671 [2024-06-10 12:09:29.415995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.416362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.416369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.671 qpair failed and we were unable to recover it. 00:31:35.671 [2024-06-10 12:09:29.416678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.417005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.417011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.671 qpair failed and we were unable to recover it. 
00:31:35.671 [2024-06-10 12:09:29.417372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.417602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.671 [2024-06-10 12:09:29.417609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.672 qpair failed and we were unable to recover it. 00:31:35.672 [2024-06-10 12:09:29.418005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.418312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.418318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.672 qpair failed and we were unable to recover it. 00:31:35.672 [2024-06-10 12:09:29.418583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.418941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.418948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.672 qpair failed and we were unable to recover it. 00:31:35.672 [2024-06-10 12:09:29.419281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.419630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.419637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.672 qpair failed and we were unable to recover it. 00:31:35.672 [2024-06-10 12:09:29.419968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.420335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.420341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.672 qpair failed and we were unable to recover it. 00:31:35.672 [2024-06-10 12:09:29.420556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.420944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.420952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.672 qpair failed and we were unable to recover it. 00:31:35.672 [2024-06-10 12:09:29.421290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.421616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.421622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.672 qpair failed and we were unable to recover it. 
00:31:35.672 [2024-06-10 12:09:29.421976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.422368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.422374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.672 qpair failed and we were unable to recover it. 00:31:35.672 [2024-06-10 12:09:29.422562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.422925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.422931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.672 qpair failed and we were unable to recover it. 00:31:35.672 [2024-06-10 12:09:29.423362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.423735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.423741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.672 qpair failed and we were unable to recover it. 00:31:35.672 [2024-06-10 12:09:29.424081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.424380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.424387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.672 qpair failed and we were unable to recover it. 00:31:35.672 [2024-06-10 12:09:29.424739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.425086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.425093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.672 qpair failed and we were unable to recover it. 00:31:35.672 [2024-06-10 12:09:29.425453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.425758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.425765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.672 qpair failed and we were unable to recover it. 00:31:35.672 [2024-06-10 12:09:29.426121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.426480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.426487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.672 qpair failed and we were unable to recover it. 
00:31:35.672 [2024-06-10 12:09:29.426675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.426924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.426930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.672 qpair failed and we were unable to recover it. 00:31:35.672 [2024-06-10 12:09:29.427299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.427651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.427658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.672 qpair failed and we were unable to recover it. 00:31:35.672 [2024-06-10 12:09:29.428025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.428365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.428372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.672 qpair failed and we were unable to recover it. 00:31:35.672 [2024-06-10 12:09:29.428734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.429050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.429056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.672 qpair failed and we were unable to recover it. 00:31:35.672 [2024-06-10 12:09:29.429407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.429751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.429757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.672 qpair failed and we were unable to recover it. 00:31:35.672 [2024-06-10 12:09:29.430113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.430453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.430460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.672 qpair failed and we were unable to recover it. 00:31:35.672 [2024-06-10 12:09:29.430655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.430862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.430869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.672 qpair failed and we were unable to recover it. 
00:31:35.672 [2024-06-10 12:09:29.431231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.431539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.431546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.672 qpair failed and we were unable to recover it. 00:31:35.672 [2024-06-10 12:09:29.431926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.432087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.432095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.672 qpair failed and we were unable to recover it. 00:31:35.672 [2024-06-10 12:09:29.432457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.432840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.432846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.672 qpair failed and we were unable to recover it. 00:31:35.672 [2024-06-10 12:09:29.433185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.433528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.433535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.672 qpair failed and we were unable to recover it. 00:31:35.672 [2024-06-10 12:09:29.433882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.434265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.434272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.672 qpair failed and we were unable to recover it. 00:31:35.672 [2024-06-10 12:09:29.434617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.434862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.434868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.672 qpair failed and we were unable to recover it. 00:31:35.672 [2024-06-10 12:09:29.435208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.435529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.672 [2024-06-10 12:09:29.435535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.672 qpair failed and we were unable to recover it. 
00:31:35.672 [2024-06-10 12:09:29.435885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.672 [2024-06-10 12:09:29.436234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.672 [2024-06-10 12:09:29.436240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420
00:31:35.672 qpair failed and we were unable to recover it.
[... the four-line pattern above (two refused connect() attempts, the nvme_tcp_qpair_connect_sock error for tqpair=0x7f1be4000b90 at addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously with only the timestamps advancing, from 12:09:29.436 through 12:09:29.541 (console time 00:31:35.672 to 00:31:35.947). Every connect() attempt fails with errno = 111 (ECONNREFUSED) and every qpair fails without recovering ...]
00:31:35.947 [2024-06-10 12:09:29.541656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.947 [2024-06-10 12:09:29.542038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.947 [2024-06-10 12:09:29.542045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.947 qpair failed and we were unable to recover it. 00:31:35.947 [2024-06-10 12:09:29.542576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.947 [2024-06-10 12:09:29.542955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.947 [2024-06-10 12:09:29.542963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.947 qpair failed and we were unable to recover it. 00:31:35.947 [2024-06-10 12:09:29.543453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.947 [2024-06-10 12:09:29.543709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.947 [2024-06-10 12:09:29.543719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.947 qpair failed and we were unable to recover it. 00:31:35.947 [2024-06-10 12:09:29.544084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.947 [2024-06-10 12:09:29.544474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.947 [2024-06-10 12:09:29.544481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.947 qpair failed and we were unable to recover it. 00:31:35.947 [2024-06-10 12:09:29.544795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.947 [2024-06-10 12:09:29.545140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.947 [2024-06-10 12:09:29.545146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.947 qpair failed and we were unable to recover it. 00:31:35.947 [2024-06-10 12:09:29.545582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.947 [2024-06-10 12:09:29.545925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.947 [2024-06-10 12:09:29.545931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.947 qpair failed and we were unable to recover it. 00:31:35.947 [2024-06-10 12:09:29.546316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.947 [2024-06-10 12:09:29.546707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.947 [2024-06-10 12:09:29.546713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.947 qpair failed and we were unable to recover it. 
00:31:35.947 [2024-06-10 12:09:29.547051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.947 [2024-06-10 12:09:29.547400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.947 [2024-06-10 12:09:29.547407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.947 qpair failed and we were unable to recover it. 00:31:35.947 [2024-06-10 12:09:29.547782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.947 [2024-06-10 12:09:29.548126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.947 [2024-06-10 12:09:29.548133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.947 qpair failed and we were unable to recover it. 00:31:35.947 [2024-06-10 12:09:29.548504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.947 [2024-06-10 12:09:29.548668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.947 [2024-06-10 12:09:29.548675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.947 qpair failed and we were unable to recover it. 00:31:35.947 [2024-06-10 12:09:29.549002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.947 [2024-06-10 12:09:29.549340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.947 [2024-06-10 12:09:29.549347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.947 qpair failed and we were unable to recover it. 00:31:35.948 [2024-06-10 12:09:29.549659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.550037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.550043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.948 qpair failed and we were unable to recover it. 00:31:35.948 [2024-06-10 12:09:29.550404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.550778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.550785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.948 qpair failed and we were unable to recover it. 00:31:35.948 [2024-06-10 12:09:29.550977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.551195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.551203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.948 qpair failed and we were unable to recover it. 
00:31:35.948 [2024-06-10 12:09:29.551557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.551764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.551770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.948 qpair failed and we were unable to recover it. 00:31:35.948 [2024-06-10 12:09:29.552130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.552512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.552519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.948 qpair failed and we were unable to recover it. 00:31:35.948 [2024-06-10 12:09:29.552865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.553200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.553207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.948 qpair failed and we were unable to recover it. 00:31:35.948 [2024-06-10 12:09:29.553449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.553806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.553813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.948 qpair failed and we were unable to recover it. 00:31:35.948 [2024-06-10 12:09:29.554149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.554487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.554494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.948 qpair failed and we were unable to recover it. 00:31:35.948 [2024-06-10 12:09:29.554841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.555183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.555190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.948 qpair failed and we were unable to recover it. 00:31:35.948 [2024-06-10 12:09:29.555534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.555886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.555892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.948 qpair failed and we were unable to recover it. 
00:31:35.948 [2024-06-10 12:09:29.556241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.556588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.556594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.948 qpair failed and we were unable to recover it. 00:31:35.948 [2024-06-10 12:09:29.556957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.557276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.557283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.948 qpair failed and we were unable to recover it. 00:31:35.948 [2024-06-10 12:09:29.557519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.557904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.557910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.948 qpair failed and we were unable to recover it. 00:31:35.948 [2024-06-10 12:09:29.558260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.558600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.558607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.948 qpair failed and we were unable to recover it. 00:31:35.948 [2024-06-10 12:09:29.558800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.559175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.559182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.948 qpair failed and we were unable to recover it. 00:31:35.948 [2024-06-10 12:09:29.559479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.559733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.559740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.948 qpair failed and we were unable to recover it. 00:31:35.948 [2024-06-10 12:09:29.559996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.560384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.560391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.948 qpair failed and we were unable to recover it. 
00:31:35.948 [2024-06-10 12:09:29.560681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.561033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.561040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.948 qpair failed and we were unable to recover it. 00:31:35.948 [2024-06-10 12:09:29.561422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.561645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.561652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.948 qpair failed and we were unable to recover it. 00:31:35.948 [2024-06-10 12:09:29.562014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.562402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.562408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.948 qpair failed and we were unable to recover it. 00:31:35.948 [2024-06-10 12:09:29.562758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.563081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.563088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.948 qpair failed and we were unable to recover it. 00:31:35.948 [2024-06-10 12:09:29.563450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.563838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.563845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.948 qpair failed and we were unable to recover it. 00:31:35.948 [2024-06-10 12:09:29.564180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.564546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.564553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.948 qpair failed and we were unable to recover it. 00:31:35.948 [2024-06-10 12:09:29.564945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.565321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.565328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.948 qpair failed and we were unable to recover it. 
00:31:35.948 [2024-06-10 12:09:29.565690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.566035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.566043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.948 qpair failed and we were unable to recover it. 00:31:35.948 [2024-06-10 12:09:29.566406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.566778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.566785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.948 qpair failed and we were unable to recover it. 00:31:35.948 [2024-06-10 12:09:29.567021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.567360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.567367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.948 qpair failed and we were unable to recover it. 00:31:35.948 [2024-06-10 12:09:29.567732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.568116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.568123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.948 qpair failed and we were unable to recover it. 00:31:35.948 [2024-06-10 12:09:29.568466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.568792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.948 [2024-06-10 12:09:29.568798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 00:31:35.949 [2024-06-10 12:09:29.569159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.569489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.569496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 00:31:35.949 [2024-06-10 12:09:29.569854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.570195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.570201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 
00:31:35.949 [2024-06-10 12:09:29.570623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.571006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.571012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 00:31:35.949 [2024-06-10 12:09:29.571348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.571666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.571672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 00:31:35.949 [2024-06-10 12:09:29.572033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.572272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.572279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 00:31:35.949 [2024-06-10 12:09:29.572581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.572999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.573005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 00:31:35.949 [2024-06-10 12:09:29.573215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.573556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.573562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 00:31:35.949 [2024-06-10 12:09:29.573944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.574331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.574337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 00:31:35.949 [2024-06-10 12:09:29.574691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.575063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.575069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 
00:31:35.949 [2024-06-10 12:09:29.575423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.575604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.575611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 00:31:35.949 [2024-06-10 12:09:29.575860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.576092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.576099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 00:31:35.949 [2024-06-10 12:09:29.576459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.576690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.576697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 00:31:35.949 [2024-06-10 12:09:29.577076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.577418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.577424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 00:31:35.949 [2024-06-10 12:09:29.577748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.578128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.578134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 00:31:35.949 [2024-06-10 12:09:29.578424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.578802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.578808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 00:31:35.949 [2024-06-10 12:09:29.579142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.579494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.579500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 
00:31:35.949 [2024-06-10 12:09:29.579744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.579948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.579955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 00:31:35.949 [2024-06-10 12:09:29.580315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.580707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.580713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 00:31:35.949 [2024-06-10 12:09:29.581048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.581425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.581432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 00:31:35.949 [2024-06-10 12:09:29.581832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.582170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.582177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 00:31:35.949 [2024-06-10 12:09:29.582586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.582935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.582943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 00:31:35.949 [2024-06-10 12:09:29.583136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.583516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.583523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 00:31:35.949 [2024-06-10 12:09:29.583910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.584257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.584264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 
00:31:35.949 [2024-06-10 12:09:29.584621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.584963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.584970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 00:31:35.949 [2024-06-10 12:09:29.585347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.585719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.585725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 00:31:35.949 [2024-06-10 12:09:29.586076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.586455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.586461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 00:31:35.949 [2024-06-10 12:09:29.586816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.587190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.587197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 00:31:35.949 [2024-06-10 12:09:29.587542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.587894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.587900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.949 qpair failed and we were unable to recover it. 00:31:35.949 [2024-06-10 12:09:29.588276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.949 [2024-06-10 12:09:29.588608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.588615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 00:31:35.950 [2024-06-10 12:09:29.588966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.589321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.589329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 
00:31:35.950 [2024-06-10 12:09:29.589714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.590095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.590102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 00:31:35.950 [2024-06-10 12:09:29.590465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.590819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.590826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 00:31:35.950 [2024-06-10 12:09:29.591209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.591562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.591569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 00:31:35.950 [2024-06-10 12:09:29.591918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.592268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.592274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 00:31:35.950 [2024-06-10 12:09:29.592599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.592959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.592965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 00:31:35.950 [2024-06-10 12:09:29.593299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.593624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.593632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 00:31:35.950 [2024-06-10 12:09:29.593916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.594265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.594272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 
00:31:35.950 [2024-06-10 12:09:29.594591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.594945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.594952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 00:31:35.950 [2024-06-10 12:09:29.595378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.595779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.595786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 00:31:35.950 [2024-06-10 12:09:29.596130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.596508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.596515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 00:31:35.950 [2024-06-10 12:09:29.596857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.597200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.597208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 00:31:35.950 [2024-06-10 12:09:29.597568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.597824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.597831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 00:31:35.950 [2024-06-10 12:09:29.598211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.598581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.598589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 00:31:35.950 [2024-06-10 12:09:29.598947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.599299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.599307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 
00:31:35.950 [2024-06-10 12:09:29.599647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.599880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.599887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 00:31:35.950 [2024-06-10 12:09:29.600223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.600560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.600569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 00:31:35.950 [2024-06-10 12:09:29.600869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.601211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.601218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 00:31:35.950 [2024-06-10 12:09:29.601608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.601952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.601959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 00:31:35.950 [2024-06-10 12:09:29.602321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.602702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.602708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 00:31:35.950 [2024-06-10 12:09:29.603050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.603419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.603425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 00:31:35.950 [2024-06-10 12:09:29.603790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.604163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.604169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 
00:31:35.950 [2024-06-10 12:09:29.604515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.604866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.604872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 00:31:35.950 [2024-06-10 12:09:29.605206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.605583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.605590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 00:31:35.950 [2024-06-10 12:09:29.605899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.606277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.606285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 00:31:35.950 [2024-06-10 12:09:29.606661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.607001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.607007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 00:31:35.950 [2024-06-10 12:09:29.607344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.607722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.607728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 00:31:35.950 [2024-06-10 12:09:29.608119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.608361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.608367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.950 qpair failed and we were unable to recover it. 00:31:35.950 [2024-06-10 12:09:29.608720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.950 [2024-06-10 12:09:29.609078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.609085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.951 qpair failed and we were unable to recover it. 
00:31:35.951 [2024-06-10 12:09:29.609421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.609660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.609666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.951 qpair failed and we were unable to recover it. 00:31:35.951 [2024-06-10 12:09:29.610024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.610366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.610372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.951 qpair failed and we were unable to recover it. 00:31:35.951 [2024-06-10 12:09:29.610750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.611021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.611028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.951 qpair failed and we were unable to recover it. 00:31:35.951 [2024-06-10 12:09:29.611389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.611730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.611736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.951 qpair failed and we were unable to recover it. 00:31:35.951 [2024-06-10 12:09:29.612040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.612398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.612404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.951 qpair failed and we were unable to recover it. 00:31:35.951 [2024-06-10 12:09:29.612777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.613124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.613130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.951 qpair failed and we were unable to recover it. 00:31:35.951 [2024-06-10 12:09:29.613501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.613865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.613871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.951 qpair failed and we were unable to recover it. 
00:31:35.951 [2024-06-10 12:09:29.614210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.614449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.614455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.951 qpair failed and we were unable to recover it. 00:31:35.951 [2024-06-10 12:09:29.614789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.615128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.615135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.951 qpair failed and we were unable to recover it. 00:31:35.951 [2024-06-10 12:09:29.615516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.615862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.615868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.951 qpair failed and we were unable to recover it. 00:31:35.951 [2024-06-10 12:09:29.616245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.616584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.616591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.951 qpair failed and we were unable to recover it. 00:31:35.951 [2024-06-10 12:09:29.616964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.617300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.617307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.951 qpair failed and we were unable to recover it. 00:31:35.951 [2024-06-10 12:09:29.617700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.618047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.618054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.951 qpair failed and we were unable to recover it. 00:31:35.951 [2024-06-10 12:09:29.618468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.618761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.618768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.951 qpair failed and we were unable to recover it. 
00:31:35.951 [2024-06-10 12:09:29.619095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.619436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.619443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.951 qpair failed and we were unable to recover it. 00:31:35.951 [2024-06-10 12:09:29.619756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.620135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.620141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.951 qpair failed and we were unable to recover it. 00:31:35.951 [2024-06-10 12:09:29.620398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.620589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.620596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.951 qpair failed and we were unable to recover it. 00:31:35.951 [2024-06-10 12:09:29.620962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.621350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.621357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.951 qpair failed and we were unable to recover it. 00:31:35.951 [2024-06-10 12:09:29.621685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.621943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.621949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.951 qpair failed and we were unable to recover it. 00:31:35.951 [2024-06-10 12:09:29.622282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.622621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.622627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.951 qpair failed and we were unable to recover it. 00:31:35.951 [2024-06-10 12:09:29.622962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.623314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.623321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.951 qpair failed and we were unable to recover it. 
00:31:35.951 [2024-06-10 12:09:29.623690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.624030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.624036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.951 qpair failed and we were unable to recover it. 00:31:35.951 [2024-06-10 12:09:29.624459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.624815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.624822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.951 qpair failed and we were unable to recover it. 00:31:35.951 [2024-06-10 12:09:29.625092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.625440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.951 [2024-06-10 12:09:29.625447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.951 qpair failed and we were unable to recover it. 00:31:35.951 [2024-06-10 12:09:29.625865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.626206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.626213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 00:31:35.952 [2024-06-10 12:09:29.626648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.626982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.626989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 00:31:35.952 [2024-06-10 12:09:29.627378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.627698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.627705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 00:31:35.952 [2024-06-10 12:09:29.628065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.628402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.628410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 
00:31:35.952 [2024-06-10 12:09:29.628783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.629028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.629034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 00:31:35.952 [2024-06-10 12:09:29.629376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.629735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.629742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 00:31:35.952 [2024-06-10 12:09:29.630116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.630559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.630565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 00:31:35.952 [2024-06-10 12:09:29.630912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.631275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.631282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 00:31:35.952 [2024-06-10 12:09:29.631607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.631952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.631958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 00:31:35.952 [2024-06-10 12:09:29.632296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.632627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.632634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 00:31:35.952 [2024-06-10 12:09:29.632992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.633350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.633357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 
00:31:35.952 [2024-06-10 12:09:29.633638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.633950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.633957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 00:31:35.952 [2024-06-10 12:09:29.634167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.634506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.634513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 00:31:35.952 [2024-06-10 12:09:29.634885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.635321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.635329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 00:31:35.952 [2024-06-10 12:09:29.635714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.636131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.636137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 00:31:35.952 [2024-06-10 12:09:29.636488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.636867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.636873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 00:31:35.952 [2024-06-10 12:09:29.637220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.637586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.637593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 00:31:35.952 [2024-06-10 12:09:29.637848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.638230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.638236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 
00:31:35.952 [2024-06-10 12:09:29.638591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.638887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.638893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 00:31:35.952 [2024-06-10 12:09:29.639166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.639319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.639333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 00:31:35.952 [2024-06-10 12:09:29.639771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.640139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.640145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 00:31:35.952 [2024-06-10 12:09:29.640559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.640888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.640894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 00:31:35.952 [2024-06-10 12:09:29.641341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.641729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.641736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 00:31:35.952 [2024-06-10 12:09:29.642077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.642307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.642315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 00:31:35.952 [2024-06-10 12:09:29.642726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.643075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.643082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 
00:31:35.952 [2024-06-10 12:09:29.643361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.643529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.643536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 00:31:35.952 [2024-06-10 12:09:29.643961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.644368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.644374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 00:31:35.952 [2024-06-10 12:09:29.644592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.644983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.644989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 00:31:35.952 [2024-06-10 12:09:29.645253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.645501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.952 [2024-06-10 12:09:29.645508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.952 qpair failed and we were unable to recover it. 00:31:35.953 [2024-06-10 12:09:29.645932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.646186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.646193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.953 qpair failed and we were unable to recover it. 00:31:35.953 [2024-06-10 12:09:29.646536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.646880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.646886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.953 qpair failed and we were unable to recover it. 00:31:35.953 [2024-06-10 12:09:29.647134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.647390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.647396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.953 qpair failed and we were unable to recover it. 
00:31:35.953 [2024-06-10 12:09:29.647629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.648005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.648011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.953 qpair failed and we were unable to recover it. 00:31:35.953 [2024-06-10 12:09:29.648264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.648630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.648636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.953 qpair failed and we were unable to recover it. 00:31:35.953 [2024-06-10 12:09:29.648983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.649297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.649303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.953 qpair failed and we were unable to recover it. 00:31:35.953 [2024-06-10 12:09:29.649690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.650038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.650045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.953 qpair failed and we were unable to recover it. 00:31:35.953 [2024-06-10 12:09:29.650269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.650646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.650652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.953 qpair failed and we were unable to recover it. 00:31:35.953 [2024-06-10 12:09:29.651012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.651284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.651291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.953 qpair failed and we were unable to recover it. 00:31:35.953 [2024-06-10 12:09:29.651587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.651828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.651834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.953 qpair failed and we were unable to recover it. 
00:31:35.953 [2024-06-10 12:09:29.652025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.652325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.652331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.953 qpair failed and we were unable to recover it. 00:31:35.953 [2024-06-10 12:09:29.652596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.652862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.652868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.953 qpair failed and we were unable to recover it. 00:31:35.953 [2024-06-10 12:09:29.653257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.653683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.653690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.953 qpair failed and we were unable to recover it. 00:31:35.953 [2024-06-10 12:09:29.654030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.654133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.654139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.953 qpair failed and we were unable to recover it. 00:31:35.953 [2024-06-10 12:09:29.654385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.654783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.654789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.953 qpair failed and we were unable to recover it. 00:31:35.953 [2024-06-10 12:09:29.655140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.655345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.655353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.953 qpair failed and we were unable to recover it. 00:31:35.953 [2024-06-10 12:09:29.655649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.655906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.655912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.953 qpair failed and we were unable to recover it. 
00:31:35.953 [2024-06-10 12:09:29.656169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.656503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.656510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.953 qpair failed and we were unable to recover it. 00:31:35.953 [2024-06-10 12:09:29.656723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.656982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.656988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.953 qpair failed and we were unable to recover it. 00:31:35.953 [2024-06-10 12:09:29.657354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.657613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.657620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.953 qpair failed and we were unable to recover it. 00:31:35.953 [2024-06-10 12:09:29.657985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.658376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.658383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.953 qpair failed and we were unable to recover it. 00:31:35.953 [2024-06-10 12:09:29.658801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.659194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.659201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.953 qpair failed and we were unable to recover it. 00:31:35.953 [2024-06-10 12:09:29.659549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.659896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.659902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.953 qpair failed and we were unable to recover it. 00:31:35.953 [2024-06-10 12:09:29.660162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.660505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.660512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.953 qpair failed and we were unable to recover it. 
00:31:35.953 [2024-06-10 12:09:29.660734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.661069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.661075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.953 qpair failed and we were unable to recover it. 00:31:35.953 [2024-06-10 12:09:29.661356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.661609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.661615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.953 qpair failed and we were unable to recover it. 00:31:35.953 [2024-06-10 12:09:29.661945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.662266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.662273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.953 qpair failed and we were unable to recover it. 00:31:35.953 [2024-06-10 12:09:29.662638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.662877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.662883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.953 qpair failed and we were unable to recover it. 00:31:35.953 [2024-06-10 12:09:29.663255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.953 [2024-06-10 12:09:29.663635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.663641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.954 qpair failed and we were unable to recover it. 00:31:35.954 [2024-06-10 12:09:29.663857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.664191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.664197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.954 qpair failed and we were unable to recover it. 00:31:35.954 [2024-06-10 12:09:29.664604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.664959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.664966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.954 qpair failed and we were unable to recover it. 
00:31:35.954 [2024-06-10 12:09:29.665325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.665654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.665660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.954 qpair failed and we were unable to recover it. 00:31:35.954 [2024-06-10 12:09:29.666024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.666356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.666362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.954 qpair failed and we were unable to recover it. 00:31:35.954 [2024-06-10 12:09:29.666558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.666843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.666849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.954 qpair failed and we were unable to recover it. 00:31:35.954 [2024-06-10 12:09:29.667207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.667594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.667602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.954 qpair failed and we were unable to recover it. 00:31:35.954 [2024-06-10 12:09:29.667961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.668323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.668329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.954 qpair failed and we were unable to recover it. 00:31:35.954 [2024-06-10 12:09:29.668565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.668802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.668808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.954 qpair failed and we were unable to recover it. 00:31:35.954 [2024-06-10 12:09:29.669057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.669397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.669404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.954 qpair failed and we were unable to recover it. 
00:31:35.954 [2024-06-10 12:09:29.669782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.670166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.670173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.954 qpair failed and we were unable to recover it. 00:31:35.954 [2024-06-10 12:09:29.670544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.670836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.670843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.954 qpair failed and we were unable to recover it. 00:31:35.954 [2024-06-10 12:09:29.671217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.671459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.671466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.954 qpair failed and we were unable to recover it. 00:31:35.954 [2024-06-10 12:09:29.671723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.671982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.671989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.954 qpair failed and we were unable to recover it. 00:31:35.954 [2024-06-10 12:09:29.672346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.672687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.672693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.954 qpair failed and we were unable to recover it. 00:31:35.954 [2024-06-10 12:09:29.673082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.673451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.673457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.954 qpair failed and we were unable to recover it. 00:31:35.954 [2024-06-10 12:09:29.673882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.674208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.674215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.954 qpair failed and we were unable to recover it. 
00:31:35.954 [2024-06-10 12:09:29.674483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.674842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.674849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.954 qpair failed and we were unable to recover it. 00:31:35.954 [2024-06-10 12:09:29.675228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.675602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.675609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.954 qpair failed and we were unable to recover it. 00:31:35.954 [2024-06-10 12:09:29.675970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.676209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.676217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.954 qpair failed and we were unable to recover it. 00:31:35.954 [2024-06-10 12:09:29.676562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.676821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.676828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.954 qpair failed and we were unable to recover it. 00:31:35.954 [2024-06-10 12:09:29.677105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.677464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.677471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.954 qpair failed and we were unable to recover it. 00:31:35.954 [2024-06-10 12:09:29.677843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.678193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.678200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.954 qpair failed and we were unable to recover it. 00:31:35.954 [2024-06-10 12:09:29.678584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.678929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.678936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.954 qpair failed and we were unable to recover it. 
00:31:35.954 [2024-06-10 12:09:29.679270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.679506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.679512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.954 qpair failed and we were unable to recover it. 00:31:35.954 [2024-06-10 12:09:29.679768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.679954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.679960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.954 qpair failed and we were unable to recover it. 00:31:35.954 [2024-06-10 12:09:29.680315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.680682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.680688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.954 qpair failed and we were unable to recover it. 00:31:35.954 [2024-06-10 12:09:29.681081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.681400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.681406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.954 qpair failed and we were unable to recover it. 00:31:35.954 [2024-06-10 12:09:29.681633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.954 [2024-06-10 12:09:29.681991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.681998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.955 qpair failed and we were unable to recover it. 00:31:35.955 [2024-06-10 12:09:29.682216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.682571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.682578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.955 qpair failed and we were unable to recover it. 00:31:35.955 [2024-06-10 12:09:29.682847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.683095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.683102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.955 qpair failed and we were unable to recover it. 
00:31:35.955 [2024-06-10 12:09:29.683466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.683814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.683821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.955 qpair failed and we were unable to recover it. 00:31:35.955 [2024-06-10 12:09:29.684206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.684593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.684600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.955 qpair failed and we were unable to recover it. 00:31:35.955 [2024-06-10 12:09:29.684864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.685142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.685149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.955 qpair failed and we were unable to recover it. 00:31:35.955 [2024-06-10 12:09:29.685519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.685913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.685920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.955 qpair failed and we were unable to recover it. 00:31:35.955 [2024-06-10 12:09:29.686269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.686612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.686618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.955 qpair failed and we were unable to recover it. 00:31:35.955 [2024-06-10 12:09:29.686853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.687108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.687114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.955 qpair failed and we were unable to recover it. 00:31:35.955 [2024-06-10 12:09:29.687469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.687825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.687831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.955 qpair failed and we were unable to recover it. 
00:31:35.955 [2024-06-10 12:09:29.688180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.688537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.688543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.955 qpair failed and we were unable to recover it. 00:31:35.955 [2024-06-10 12:09:29.688907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.689264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.689270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.955 qpair failed and we were unable to recover it. 00:31:35.955 [2024-06-10 12:09:29.689553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.689905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.689911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.955 qpair failed and we were unable to recover it. 00:31:35.955 [2024-06-10 12:09:29.690270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.690478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.690485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.955 qpair failed and we were unable to recover it. 00:31:35.955 [2024-06-10 12:09:29.690834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.691177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.691184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.955 qpair failed and we were unable to recover it. 00:31:35.955 [2024-06-10 12:09:29.691592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.691814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.691821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.955 qpair failed and we were unable to recover it. 00:31:35.955 [2024-06-10 12:09:29.692035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.692389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.692396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.955 qpair failed and we were unable to recover it. 
00:31:35.955 [2024-06-10 12:09:29.692733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.693106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.693112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.955 qpair failed and we were unable to recover it. 00:31:35.955 [2024-06-10 12:09:29.693477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.693838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.693844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.955 qpair failed and we were unable to recover it. 00:31:35.955 [2024-06-10 12:09:29.694185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.694552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.694559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.955 qpair failed and we were unable to recover it. 00:31:35.955 [2024-06-10 12:09:29.694906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.695293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.695300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.955 qpair failed and we were unable to recover it. 00:31:35.955 [2024-06-10 12:09:29.695657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.696003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.696010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.955 qpair failed and we were unable to recover it. 00:31:35.955 [2024-06-10 12:09:29.696385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.696731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.955 [2024-06-10 12:09:29.696738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.955 qpair failed and we were unable to recover it. 00:31:35.956 [2024-06-10 12:09:29.697083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.956 [2024-06-10 12:09:29.697379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.956 [2024-06-10 12:09:29.697386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.956 qpair failed and we were unable to recover it. 
00:31:35.956 [2024-06-10 12:09:29.697766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.956 [2024-06-10 12:09:29.698014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.956 [2024-06-10 12:09:29.698021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.956 qpair failed and we were unable to recover it. 00:31:35.956 [2024-06-10 12:09:29.698407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.956 [2024-06-10 12:09:29.698759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.956 [2024-06-10 12:09:29.698765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.956 qpair failed and we were unable to recover it. 00:31:35.956 [2024-06-10 12:09:29.699107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.956 [2024-06-10 12:09:29.699476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.956 [2024-06-10 12:09:29.699482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.956 qpair failed and we were unable to recover it. 00:31:35.956 [2024-06-10 12:09:29.699849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.956 [2024-06-10 12:09:29.700206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.956 [2024-06-10 12:09:29.700213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.956 qpair failed and we were unable to recover it. 00:31:35.956 [2024-06-10 12:09:29.700667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.956 [2024-06-10 12:09:29.700891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.956 [2024-06-10 12:09:29.700898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.956 qpair failed and we were unable to recover it. 00:31:35.956 [2024-06-10 12:09:29.701209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.956 [2024-06-10 12:09:29.701560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.956 [2024-06-10 12:09:29.701566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.956 qpair failed and we were unable to recover it. 00:31:35.956 [2024-06-10 12:09:29.701917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.956 [2024-06-10 12:09:29.702274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.956 [2024-06-10 12:09:29.702281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.956 qpair failed and we were unable to recover it. 
00:31:35.956 [2024-06-10 12:09:29.702669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.956 [2024-06-10 12:09:29.703012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.956 [2024-06-10 12:09:29.703018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.956 qpair failed and we were unable to recover it. 00:31:35.956 [2024-06-10 12:09:29.703444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.956 [2024-06-10 12:09:29.703785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.956 [2024-06-10 12:09:29.703792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.956 qpair failed and we were unable to recover it. 00:31:35.956 [2024-06-10 12:09:29.704132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.956 [2024-06-10 12:09:29.704494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.956 [2024-06-10 12:09:29.704500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.956 qpair failed and we were unable to recover it. 00:31:35.956 [2024-06-10 12:09:29.704854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.956 [2024-06-10 12:09:29.705217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.956 [2024-06-10 12:09:29.705223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:35.956 qpair failed and we were unable to recover it. 00:31:36.227 [2024-06-10 12:09:29.705560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.705816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.705823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-06-10 12:09:29.706077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.706128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.706135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-06-10 12:09:29.706468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.706858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.706865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 
00:31:36.227 [2024-06-10 12:09:29.707060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.707395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.707402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-06-10 12:09:29.707619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.707997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.708003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-06-10 12:09:29.708270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.708628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.708634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-06-10 12:09:29.708900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.709278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.709284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-06-10 12:09:29.709628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.710004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.710010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-06-10 12:09:29.710420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.710754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.710760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-06-10 12:09:29.710980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.711347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.711353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 
00:31:36.227 [2024-06-10 12:09:29.711803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.712111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.712117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-06-10 12:09:29.712500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.712686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.712693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-06-10 12:09:29.712927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.713270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.713277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-06-10 12:09:29.713635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.713976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.713982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-06-10 12:09:29.714369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.714757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.714764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-06-10 12:09:29.715002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.715375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.715381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-06-10 12:09:29.715738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.716117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.716123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 
00:31:36.227 [2024-06-10 12:09:29.716521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.716860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.716867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-06-10 12:09:29.717213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.717581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.717587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-06-10 12:09:29.717919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.718262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.718269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-06-10 12:09:29.718645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.718929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.718935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-06-10 12:09:29.719297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.719385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.719391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-06-10 12:09:29.719733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.720078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-06-10 12:09:29.720085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-06-10 12:09:29.720171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.720435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.720442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 
00:31:36.228 [2024-06-10 12:09:29.720828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.721165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.721172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-06-10 12:09:29.721393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.721774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.721781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-06-10 12:09:29.722159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.722536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.722542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-06-10 12:09:29.722920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.723304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.723311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-06-10 12:09:29.723580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.723963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.723969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-06-10 12:09:29.724304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.724634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.724641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-06-10 12:09:29.725001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.725344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.725351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 
00:31:36.228 [2024-06-10 12:09:29.725694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.725927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.725933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-06-10 12:09:29.726322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.726672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.726679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-06-10 12:09:29.727036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.727398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.727405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-06-10 12:09:29.727820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.728155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.728164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-06-10 12:09:29.728411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.728790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.728797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-06-10 12:09:29.729156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.729549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.729555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-06-10 12:09:29.729813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.730195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.730201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 
00:31:36.228 [2024-06-10 12:09:29.730581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.730965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.730972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-06-10 12:09:29.731329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.731602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.731609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-06-10 12:09:29.731966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.732202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.732209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-06-10 12:09:29.732558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.732897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.732904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-06-10 12:09:29.733261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.733603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.733609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-06-10 12:09:29.733954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.734306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.734312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-06-10 12:09:29.734650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.734999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.735007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 
00:31:36.228 [2024-06-10 12:09:29.735362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.735742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.735748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-06-10 12:09:29.735989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.736189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.736196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-06-10 12:09:29.736517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.736780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.736786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-06-10 12:09:29.737165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.737541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.737548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-06-10 12:09:29.737910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.738266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-06-10 12:09:29.738272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-06-10 12:09:29.738610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.738971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.738977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-06-10 12:09:29.739379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.739706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.739712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 
00:31:36.229 [2024-06-10 12:09:29.740049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.740417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.740423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-06-10 12:09:29.740794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.741171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.741177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-06-10 12:09:29.741417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.741688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.741696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-06-10 12:09:29.742033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.742278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.742284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-06-10 12:09:29.742634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.742979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.742985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-06-10 12:09:29.743271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.743506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.743513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-06-10 12:09:29.743708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.744090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.744097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 
00:31:36.229 [2024-06-10 12:09:29.744310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.744541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.744547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-06-10 12:09:29.744922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.745265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.745272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-06-10 12:09:29.745521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.745875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.745882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-06-10 12:09:29.746256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.746489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.746496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-06-10 12:09:29.746864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.747212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.747218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-06-10 12:09:29.747655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.747997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.748005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-06-10 12:09:29.748384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.748744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.748750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 
00:31:36.229 [2024-06-10 12:09:29.749096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.749468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.749475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-06-10 12:09:29.749833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.750218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.750224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-06-10 12:09:29.750567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.750856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.750862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-06-10 12:09:29.751212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.751656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.751664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-06-10 12:09:29.752015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.752450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.752477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-06-10 12:09:29.752832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.753166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.753173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-06-10 12:09:29.753550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.753900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.753906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 
00:31:36.229 [2024-06-10 12:09:29.754134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.754470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.754477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-06-10 12:09:29.754825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.755162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-06-10 12:09:29.755169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.230 [2024-06-10 12:09:29.755536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.755886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.755892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-06-10 12:09:29.756137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.756514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.756521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-06-10 12:09:29.756807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.757142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.757149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-06-10 12:09:29.757508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.757923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.757930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-06-10 12:09:29.758113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.758490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.758497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 
00:31:36.230 [2024-06-10 12:09:29.758876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.759248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.759255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-06-10 12:09:29.759644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.760031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.760037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-06-10 12:09:29.760476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.760854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.760863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-06-10 12:09:29.761257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.761602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.761608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-06-10 12:09:29.761696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.762040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.762046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-06-10 12:09:29.762417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.762766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.762773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-06-10 12:09:29.763153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.763471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.763479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 
00:31:36.230 [2024-06-10 12:09:29.763844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.764200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.764207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-06-10 12:09:29.764585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.764926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.764933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-06-10 12:09:29.765290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.765629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.765636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-06-10 12:09:29.766012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.766354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.766361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-06-10 12:09:29.766720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.766981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.766987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-06-10 12:09:29.767328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.767700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.767706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-06-10 12:09:29.768069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.768421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.768428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 
00:31:36.230 [2024-06-10 12:09:29.768812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.769206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.769213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-06-10 12:09:29.769446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.769830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.769837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-06-10 12:09:29.770214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.770559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.770565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-06-10 12:09:29.770881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.771234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.771240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-06-10 12:09:29.771541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.771879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.771885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-06-10 12:09:29.772222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.772605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.772612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-06-10 12:09:29.773001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.773446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-06-10 12:09:29.773473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 
00:31:36.230 [2024-06-10 12:09:29.773825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-06-10 12:09:29.774121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-06-10 12:09:29.774128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-06-10 12:09:29.774468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-06-10 12:09:29.774844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-06-10 12:09:29.774851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-06-10 12:09:29.775213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-06-10 12:09:29.775567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-06-10 12:09:29.775575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-06-10 12:09:29.775837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-06-10 12:09:29.776230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-06-10 12:09:29.776238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-06-10 12:09:29.776609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-06-10 12:09:29.776997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-06-10 12:09:29.777004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-06-10 12:09:29.777386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-06-10 12:09:29.777609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-06-10 12:09:29.777616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-06-10 12:09:29.778007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-06-10 12:09:29.778368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-06-10 12:09:29.778374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 
00:31:36.231 [2024-06-10 12:09:29.778620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-06-10 12:09:29.779038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-06-10 12:09:29.779044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-06-10 12:09:29.779400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-06-10 12:09:29.779753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-06-10 12:09:29.779759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-06-10 12:09:29.780092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-06-10 12:09:29.780332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-06-10 12:09:29.780338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-06-10 12:09:29.780750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-06-10 12:09:29.781042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-06-10 12:09:29.781048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-06-10 12:09:29.781376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-06-10 12:09:29.781699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-06-10 12:09:29.781705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-06-10 12:09:29.782048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-06-10 12:09:29.782389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-06-10 12:09:29.782395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-06-10 12:09:29.782760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-06-10 12:09:29.783045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-06-10 12:09:29.783051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 
[... the identical retry sequence — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 12:09:29.783 through 12:09:29.883 ...]
00:31:36.237 [2024-06-10 12:09:29.883972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.884316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.884323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-06-10 12:09:29.884696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.884930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.884936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-06-10 12:09:29.885287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.885540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.885546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-06-10 12:09:29.885891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.886235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.886244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-06-10 12:09:29.886562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.886923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.886929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-06-10 12:09:29.887275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.887649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.887656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-06-10 12:09:29.887909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.888174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.888180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 
00:31:36.237 [2024-06-10 12:09:29.888518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.888877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.888884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-06-10 12:09:29.889231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.889604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.889611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-06-10 12:09:29.889846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.890199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.890206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-06-10 12:09:29.890559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.890922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.890929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-06-10 12:09:29.891156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.891543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.891550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-06-10 12:09:29.891920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.892304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.892310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-06-10 12:09:29.892636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.893016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.893023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 
00:31:36.237 [2024-06-10 12:09:29.893402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.893778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.893785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-06-10 12:09:29.894146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.894504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.894510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-06-10 12:09:29.894857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.895231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-06-10 12:09:29.895237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-06-10 12:09:29.895606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.895931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.895937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-06-10 12:09:29.896277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.896504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.896511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-06-10 12:09:29.896891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.897148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.897155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-06-10 12:09:29.897488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.897839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.897845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 
00:31:36.238 [2024-06-10 12:09:29.898186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.898367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.898374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-06-10 12:09:29.898724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.899078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.899085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-06-10 12:09:29.899439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.899662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.899669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-06-10 12:09:29.900020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.900368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.900376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-06-10 12:09:29.900737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.901106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.901113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-06-10 12:09:29.901496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.901845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.901852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-06-10 12:09:29.902192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.902529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.902536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 
00:31:36.238 [2024-06-10 12:09:29.902847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.903241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.903250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-06-10 12:09:29.903666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.904011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.904017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-06-10 12:09:29.904403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.904803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.904810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-06-10 12:09:29.905172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.905572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.905578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-06-10 12:09:29.905921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.906131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.906138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-06-10 12:09:29.906477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.906786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.906792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-06-10 12:09:29.907034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.907403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.907409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 
00:31:36.238 [2024-06-10 12:09:29.907752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.908000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.908006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-06-10 12:09:29.908366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.908762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.908769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-06-10 12:09:29.908994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.909307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.909314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-06-10 12:09:29.909685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.910023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.910029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-06-10 12:09:29.910295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.910656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.910662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-06-10 12:09:29.911060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.911116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.911123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-06-10 12:09:29.911459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.911843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.911850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 
00:31:36.238 [2024-06-10 12:09:29.912198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.912568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.912576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-06-10 12:09:29.912931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.913283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-06-10 12:09:29.913289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.239 [2024-06-10 12:09:29.913519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.913904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.913911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-06-10 12:09:29.914139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.914511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.914518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-06-10 12:09:29.914743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.915136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.915142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-06-10 12:09:29.915507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.915805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.915811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-06-10 12:09:29.916205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.916603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.916609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 
00:31:36.239 [2024-06-10 12:09:29.916993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.917340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.917347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-06-10 12:09:29.917731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.918008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.918015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-06-10 12:09:29.918237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.918596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.918604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-06-10 12:09:29.918985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.919204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.919210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-06-10 12:09:29.919380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.919749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.919756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-06-10 12:09:29.920134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.920517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.920523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-06-10 12:09:29.920866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.921219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.921225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 
00:31:36.239 [2024-06-10 12:09:29.921590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.921968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.921974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-06-10 12:09:29.922356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.922714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.922721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-06-10 12:09:29.923086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.923301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.923308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-06-10 12:09:29.923362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.923730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.923737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-06-10 12:09:29.924078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.924450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.924457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-06-10 12:09:29.924808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.925187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.925193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-06-10 12:09:29.925367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.925747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.925753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 
00:31:36.239 [2024-06-10 12:09:29.926109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.926323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.926329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-06-10 12:09:29.926538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.926868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.926874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-06-10 12:09:29.927307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.927642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.927648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-06-10 12:09:29.927985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.928337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.928344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-06-10 12:09:29.928714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.929057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.929063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-06-10 12:09:29.929315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.929679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.929685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-06-10 12:09:29.930025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.930412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.930419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 
00:31:36.239 [2024-06-10 12:09:29.930758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.931136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.931142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-06-10 12:09:29.931495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.931894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.931901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-06-10 12:09:29.932241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.932597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-06-10 12:09:29.932603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.240 [2024-06-10 12:09:29.932859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.933209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.933215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 00:31:36.240 [2024-06-10 12:09:29.933555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.933926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.933933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 00:31:36.240 [2024-06-10 12:09:29.934295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.934667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.934674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 00:31:36.240 [2024-06-10 12:09:29.935052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.935393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.935400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 
00:31:36.240 [2024-06-10 12:09:29.935613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.935995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.936001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 00:31:36.240 [2024-06-10 12:09:29.936341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.936719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.936725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 00:31:36.240 [2024-06-10 12:09:29.936950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.937299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.937305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 00:31:36.240 [2024-06-10 12:09:29.937643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.937992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.937998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 00:31:36.240 [2024-06-10 12:09:29.938210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.938522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.938529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 00:31:36.240 [2024-06-10 12:09:29.938906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.939294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.939302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 00:31:36.240 [2024-06-10 12:09:29.939649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.939989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.939995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 
00:31:36.240 [2024-06-10 12:09:29.940340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.940717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.940725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 00:31:36.240 [2024-06-10 12:09:29.941014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.941385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.941392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 00:31:36.240 [2024-06-10 12:09:29.941735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.941983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.941989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 00:31:36.240 [2024-06-10 12:09:29.942346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.942689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.942696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 00:31:36.240 [2024-06-10 12:09:29.943031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.943398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.943404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 00:31:36.240 [2024-06-10 12:09:29.943694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.944077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.944083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 00:31:36.240 [2024-06-10 12:09:29.944432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.944784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.944791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 
00:31:36.240 [2024-06-10 12:09:29.945201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.945538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.945544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 00:31:36.240 [2024-06-10 12:09:29.945940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.946297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.946304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 00:31:36.240 [2024-06-10 12:09:29.946664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.946990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.946996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 00:31:36.240 [2024-06-10 12:09:29.947210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.947579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.947588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 00:31:36.240 [2024-06-10 12:09:29.947922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.948268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.948275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 00:31:36.240 [2024-06-10 12:09:29.948639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.948979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.948985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 00:31:36.240 [2024-06-10 12:09:29.949342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.949691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.949697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 
00:31:36.240 [2024-06-10 12:09:29.949875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.950194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.950200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 00:31:36.240 [2024-06-10 12:09:29.950539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.950883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.950889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 00:31:36.240 [2024-06-10 12:09:29.951251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.951705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.951711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 00:31:36.240 [2024-06-10 12:09:29.952060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.952408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.952415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 00:31:36.240 [2024-06-10 12:09:29.952775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.953080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.240 [2024-06-10 12:09:29.953087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.240 qpair failed and we were unable to recover it. 00:31:36.240 [2024-06-10 12:09:29.953448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.953781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.953787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.241 qpair failed and we were unable to recover it. 00:31:36.241 [2024-06-10 12:09:29.954139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.954486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.954494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.241 qpair failed and we were unable to recover it. 
00:31:36.241 [2024-06-10 12:09:29.954855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.955256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.955262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.241 qpair failed and we were unable to recover it. 00:31:36.241 [2024-06-10 12:09:29.955644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.955881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.955887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.241 qpair failed and we were unable to recover it. 00:31:36.241 [2024-06-10 12:09:29.956152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.956429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.956436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.241 qpair failed and we were unable to recover it. 00:31:36.241 [2024-06-10 12:09:29.956814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.957043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.957050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.241 qpair failed and we were unable to recover it. 00:31:36.241 [2024-06-10 12:09:29.957406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.957747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.957753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.241 qpair failed and we were unable to recover it. 00:31:36.241 [2024-06-10 12:09:29.958138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.958497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.958504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.241 qpair failed and we were unable to recover it. 00:31:36.241 [2024-06-10 12:09:29.958856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.959213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.959219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.241 qpair failed and we were unable to recover it. 
00:31:36.241 [2024-06-10 12:09:29.959486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.959683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.959689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.241 qpair failed and we were unable to recover it. 00:31:36.241 [2024-06-10 12:09:29.960051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.960406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.960412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.241 qpair failed and we were unable to recover it. 00:31:36.241 [2024-06-10 12:09:29.960756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.961075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.961084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.241 qpair failed and we were unable to recover it. 00:31:36.241 [2024-06-10 12:09:29.961336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.961615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.961621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.241 qpair failed and we were unable to recover it. 00:31:36.241 [2024-06-10 12:09:29.961994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.962335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.962341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.241 qpair failed and we were unable to recover it. 00:31:36.241 [2024-06-10 12:09:29.962684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.963044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.963050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.241 qpair failed and we were unable to recover it. 00:31:36.241 [2024-06-10 12:09:29.963379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.963769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.963775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.241 qpair failed and we were unable to recover it. 
00:31:36.241 [2024-06-10 12:09:29.963957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.964293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.964299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.241 qpair failed and we were unable to recover it. 00:31:36.241 [2024-06-10 12:09:29.964658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.965052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.965059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.241 qpair failed and we were unable to recover it. 00:31:36.241 [2024-06-10 12:09:29.965490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.965827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.965833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.241 qpair failed and we were unable to recover it. 00:31:36.241 [2024-06-10 12:09:29.966224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.966429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.966436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.241 qpair failed and we were unable to recover it. 00:31:36.241 [2024-06-10 12:09:29.966834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.967186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.967192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.241 qpair failed and we were unable to recover it. 00:31:36.241 [2024-06-10 12:09:29.967544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.967882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.967888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.241 qpair failed and we were unable to recover it. 00:31:36.241 [2024-06-10 12:09:29.968266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.241 [2024-06-10 12:09:29.968655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.968661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 
00:31:36.242 [2024-06-10 12:09:29.968998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.969347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.969354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 00:31:36.242 [2024-06-10 12:09:29.969721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.969982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.969988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 00:31:36.242 [2024-06-10 12:09:29.970286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.970649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.970656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 00:31:36.242 [2024-06-10 12:09:29.971016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.971418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.971424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 00:31:36.242 [2024-06-10 12:09:29.971590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.971762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.971775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 00:31:36.242 [2024-06-10 12:09:29.972128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.972468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.972475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 00:31:36.242 [2024-06-10 12:09:29.972783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.973147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.973153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 
00:31:36.242 [2024-06-10 12:09:29.973508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.973871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.973877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 00:31:36.242 [2024-06-10 12:09:29.974235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.974611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.974617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 00:31:36.242 [2024-06-10 12:09:29.974870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.975251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.975258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 00:31:36.242 [2024-06-10 12:09:29.975602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.975914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.975921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 00:31:36.242 [2024-06-10 12:09:29.976262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.976646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.976652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 00:31:36.242 [2024-06-10 12:09:29.976991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.977354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.977360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 00:31:36.242 [2024-06-10 12:09:29.977720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.978105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.978111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 
00:31:36.242 [2024-06-10 12:09:29.978487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.978794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.978801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 00:31:36.242 [2024-06-10 12:09:29.979183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.979535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.979541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 00:31:36.242 [2024-06-10 12:09:29.979818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.980167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.980173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 00:31:36.242 [2024-06-10 12:09:29.980509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.980872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.980879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 00:31:36.242 [2024-06-10 12:09:29.981225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.981596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.981602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 00:31:36.242 [2024-06-10 12:09:29.981940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.982283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.982291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 00:31:36.242 [2024-06-10 12:09:29.982670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.983011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.983017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 
00:31:36.242 [2024-06-10 12:09:29.983362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.983710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.983716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 00:31:36.242 [2024-06-10 12:09:29.983986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.984327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.984335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 00:31:36.242 [2024-06-10 12:09:29.984695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.985081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.985087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 00:31:36.242 [2024-06-10 12:09:29.985480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.985856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.985862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 00:31:36.242 [2024-06-10 12:09:29.986113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.986490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.986497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 00:31:36.242 [2024-06-10 12:09:29.986836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.987122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.987128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 00:31:36.242 [2024-06-10 12:09:29.987483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.987832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.987839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 
00:31:36.242 [2024-06-10 12:09:29.988223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.988572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.242 [2024-06-10 12:09:29.988578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.242 qpair failed and we were unable to recover it. 00:31:36.243 [2024-06-10 12:09:29.988936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.989280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.989288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-06-10 12:09:29.989724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.989902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.989909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-06-10 12:09:29.990265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.990597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.990603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-06-10 12:09:29.990955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.991297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.991304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-06-10 12:09:29.991684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.992046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.992052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-06-10 12:09:29.992396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.992781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.992788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 
00:31:36.513 [2024-06-10 12:09:29.993148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.993494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.993501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-06-10 12:09:29.993844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.994238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.994246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-06-10 12:09:29.994583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.994945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.994951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-06-10 12:09:29.995289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.995663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.995669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-06-10 12:09:29.996016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.996364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.996370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-06-10 12:09:29.996791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.997082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.997088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-06-10 12:09:29.997451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.997801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.997807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 
00:31:36.513 [2024-06-10 12:09:29.998143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.998486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.998493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-06-10 12:09:29.998829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.999177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.999183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-06-10 12:09:29.999417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.999796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:29.999802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-06-10 12:09:30.000143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:30.000494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:30.000500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-06-10 12:09:30.000741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:30.001124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:30.001130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-06-10 12:09:30.001384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:30.001654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:30.001661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-06-10 12:09:30.001842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:30.002230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:30.002237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 
00:31:36.513 [2024-06-10 12:09:30.003181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:30.003448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:30.003456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-06-10 12:09:30.003837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:30.004189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:30.004195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-06-10 12:09:30.004558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:30.004947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:30.004953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-06-10 12:09:30.005297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:30.005660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:30.005667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-06-10 12:09:30.005923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:30.006163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:30.006170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-06-10 12:09:30.006404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:30.006798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:30.006805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-06-10 12:09:30.007249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:30.007499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-06-10 12:09:30.007505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 
00:31:36.514 [2024-06-10 12:09:30.007730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.007820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.007826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-06-10 12:09:30.008213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.008562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.008569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-06-10 12:09:30.008952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.009219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.009226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-06-10 12:09:30.009582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.009722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.009729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-06-10 12:09:30.009978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.010237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.010248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-06-10 12:09:30.010654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.011044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.011051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-06-10 12:09:30.011395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.011651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.011658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 
00:31:36.514 [2024-06-10 12:09:30.011979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.012324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.012331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-06-10 12:09:30.012733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.012925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.012931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-06-10 12:09:30.013127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.013369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.013376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-06-10 12:09:30.013673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.013820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.013826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-06-10 12:09:30.014005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.014261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.014267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-06-10 12:09:30.014519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.014708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.014714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-06-10 12:09:30.015118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.015450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.015457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 
00:31:36.514 [2024-06-10 12:09:30.015820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.016015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.016022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-06-10 12:09:30.016253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.016621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.016627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-06-10 12:09:30.016965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.017309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.017316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-06-10 12:09:30.017675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.018040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.018046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-06-10 12:09:30.018382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.018738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.018744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-06-10 12:09:30.019005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.019367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.019374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-06-10 12:09:30.019513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.019890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.019896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 
00:31:36.514 [2024-06-10 12:09:30.020148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.020379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.020386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-06-10 12:09:30.020738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.021022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.021029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-06-10 12:09:30.021409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.021776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.021783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-06-10 12:09:30.022160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.022502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.022508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-06-10 12:09:30.022871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-06-10 12:09:30.023223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.023230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-06-10 12:09:30.023491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.023839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.023846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-06-10 12:09:30.024204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.024584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.024592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 
00:31:36.515 [2024-06-10 12:09:30.024967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.025309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.025316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-06-10 12:09:30.025685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.026006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.026013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-06-10 12:09:30.026355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.026726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.026732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-06-10 12:09:30.027056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.027409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.027416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-06-10 12:09:30.027761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.028123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.028130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-06-10 12:09:30.028488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.028856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.028862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-06-10 12:09:30.029203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.029535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.029542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 
00:31:36.515 [2024-06-10 12:09:30.029912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.030250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.030257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-06-10 12:09:30.030603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.031008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.031014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-06-10 12:09:30.031344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.031696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.031702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-06-10 12:09:30.032046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.032399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.032407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-06-10 12:09:30.032770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.033117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.033123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-06-10 12:09:30.033399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.033753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.033759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-06-10 12:09:30.034073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.034431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.034438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 
00:31:36.515 [2024-06-10 12:09:30.034776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.035063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.035070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-06-10 12:09:30.035425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.035777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.035785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-06-10 12:09:30.036170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.036519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.036526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-06-10 12:09:30.036940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.037300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.037307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-06-10 12:09:30.037556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.037938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.037945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-06-10 12:09:30.038287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.038651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.038658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-06-10 12:09:30.038903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.039252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-06-10 12:09:30.039259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 
00:31:36.515 [... the same failure pattern -- two "posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111" messages, one "nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." -- repeats continuously from 12:09:30.039 through 12:09:30.135 ...]
00:31:36.521 [2024-06-10 12:09:30.135818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.521 [2024-06-10 12:09:30.136143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.521 [2024-06-10 12:09:30.136149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.521 qpair failed and we were unable to recover it. 00:31:36.521 [2024-06-10 12:09:30.136510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.521 [2024-06-10 12:09:30.136860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.521 [2024-06-10 12:09:30.136867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.521 qpair failed and we were unable to recover it. 00:31:36.521 [2024-06-10 12:09:30.137212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.521 [2024-06-10 12:09:30.137471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.521 [2024-06-10 12:09:30.137477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.521 qpair failed and we were unable to recover it. 00:31:36.521 [2024-06-10 12:09:30.137825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.521 [2024-06-10 12:09:30.137891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.521 [2024-06-10 12:09:30.137898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.521 qpair failed and we were unable to recover it. 00:31:36.521 [2024-06-10 12:09:30.138105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.521 [2024-06-10 12:09:30.138490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.521 [2024-06-10 12:09:30.138496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.521 qpair failed and we were unable to recover it. 00:31:36.521 [2024-06-10 12:09:30.138818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.139137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.139143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.522 qpair failed and we were unable to recover it. 00:31:36.522 [2024-06-10 12:09:30.139553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.139917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.139923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.522 qpair failed and we were unable to recover it. 
00:31:36.522 [2024-06-10 12:09:30.140268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.140477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.140484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.522 qpair failed and we were unable to recover it. 00:31:36.522 [2024-06-10 12:09:30.140857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.141247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.141254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.522 qpair failed and we were unable to recover it. 00:31:36.522 [2024-06-10 12:09:30.141577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.141964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.141970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.522 qpair failed and we were unable to recover it. 00:31:36.522 [2024-06-10 12:09:30.142310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.142552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.142559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.522 qpair failed and we were unable to recover it. 00:31:36.522 [2024-06-10 12:09:30.142924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.143314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.143320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.522 qpair failed and we were unable to recover it. 00:31:36.522 [2024-06-10 12:09:30.143739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.144082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.144088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.522 qpair failed and we were unable to recover it. 00:31:36.522 [2024-06-10 12:09:30.144456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.144780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.144786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.522 qpair failed and we were unable to recover it. 
00:31:36.522 [2024-06-10 12:09:30.145167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.145527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.145533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.522 qpair failed and we were unable to recover it. 00:31:36.522 [2024-06-10 12:09:30.145767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.146136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.146142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.522 qpair failed and we were unable to recover it. 00:31:36.522 [2024-06-10 12:09:30.146549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.146900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.146907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.522 qpair failed and we were unable to recover it. 00:31:36.522 [2024-06-10 12:09:30.147266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.147635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.147641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.522 qpair failed and we were unable to recover it. 00:31:36.522 [2024-06-10 12:09:30.148065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.148396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.148403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.522 qpair failed and we were unable to recover it. 00:31:36.522 [2024-06-10 12:09:30.148805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.149108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.149114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.522 qpair failed and we were unable to recover it. 00:31:36.522 [2024-06-10 12:09:30.149467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.149815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.149821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.522 qpair failed and we were unable to recover it. 
00:31:36.522 [2024-06-10 12:09:30.150003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.150341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.150348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.522 qpair failed and we were unable to recover it. 00:31:36.522 [2024-06-10 12:09:30.150613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.150844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.150850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.522 qpair failed and we were unable to recover it. 00:31:36.522 [2024-06-10 12:09:30.151218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.151409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.151416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.522 qpair failed and we were unable to recover it. 00:31:36.522 [2024-06-10 12:09:30.151780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.152139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.152146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.522 qpair failed and we were unable to recover it. 00:31:36.522 [2024-06-10 12:09:30.152528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.152889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.152896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.522 qpair failed and we were unable to recover it. 00:31:36.522 [2024-06-10 12:09:30.153220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.153467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.153474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.522 qpair failed and we were unable to recover it. 00:31:36.522 [2024-06-10 12:09:30.153817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.154188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.154195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.522 qpair failed and we were unable to recover it. 
00:31:36.522 [2024-06-10 12:09:30.154597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.154947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.154956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.522 qpair failed and we were unable to recover it. 00:31:36.522 [2024-06-10 12:09:30.155220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.155556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.155564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.522 qpair failed and we were unable to recover it. 00:31:36.522 [2024-06-10 12:09:30.155774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.156150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.522 [2024-06-10 12:09:30.156157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.522 qpair failed and we were unable to recover it. 00:31:36.523 [2024-06-10 12:09:30.156513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.156868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.156875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-06-10 12:09:30.157089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.157427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.157433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-06-10 12:09:30.157692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.157947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.157953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-06-10 12:09:30.158291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.158690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.158696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 
00:31:36.523 [2024-06-10 12:09:30.158941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.159293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.159308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-06-10 12:09:30.159666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.160038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.160044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-06-10 12:09:30.160269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.160480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.160487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-06-10 12:09:30.160760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.161096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.161104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-06-10 12:09:30.161463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.161748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.161754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-06-10 12:09:30.162111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.162460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.162466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-06-10 12:09:30.162813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.163187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.163194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 
00:31:36.523 [2024-06-10 12:09:30.163549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.163913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.163920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-06-10 12:09:30.164127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.164366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.164373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-06-10 12:09:30.164722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.165084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.165091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-06-10 12:09:30.165304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.165646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.165653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-06-10 12:09:30.165991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.166340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.166346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-06-10 12:09:30.166730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.167100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.167107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-06-10 12:09:30.167461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.167808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.167817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 
00:31:36.523 [2024-06-10 12:09:30.168178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.168571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.168577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-06-10 12:09:30.168774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.168968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.168975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-06-10 12:09:30.169274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.169674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-06-10 12:09:30.169681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.524 [2024-06-10 12:09:30.170027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.170258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.170266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-06-10 12:09:30.170625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.170976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.170982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-06-10 12:09:30.171324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.171668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.171675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-06-10 12:09:30.172026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.172339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.172347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 
00:31:36.524 [2024-06-10 12:09:30.172672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.172893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.172899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-06-10 12:09:30.173258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.173468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.173475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-06-10 12:09:30.173846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.174230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.174238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-06-10 12:09:30.174399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.174813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.174819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-06-10 12:09:30.175175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.175477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.175484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-06-10 12:09:30.175837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.176183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.176190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-06-10 12:09:30.176523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.176856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.176862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 
00:31:36.524 [2024-06-10 12:09:30.177209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.177575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.177582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-06-10 12:09:30.177910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.178061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.178069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-06-10 12:09:30.178333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.178715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.178722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-06-10 12:09:30.179020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.179392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.179399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-06-10 12:09:30.179755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.179988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.179994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-06-10 12:09:30.180337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.180719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.180726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-06-10 12:09:30.181062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.181437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.181444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 
00:31:36.524 [2024-06-10 12:09:30.181825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.182166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.182172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-06-10 12:09:30.182311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.182569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.182576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-06-10 12:09:30.182958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.183270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.183277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-06-10 12:09:30.183612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.183924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.183931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-06-10 12:09:30.184315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.184657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.184663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-06-10 12:09:30.185012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.185360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.185366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-06-10 12:09:30.185672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.186052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-06-10 12:09:30.186058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 
00:31:36.524 [2024-06-10 12:09:30.186410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.186754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.186761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-06-10 12:09:30.187029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.187418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.187425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-06-10 12:09:30.187807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.188167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.188174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-06-10 12:09:30.188508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.188900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.188907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-06-10 12:09:30.189286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.189646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.189652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-06-10 12:09:30.189990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.190348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.190355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-06-10 12:09:30.190549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.190880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.190888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 
00:31:36.525 [2024-06-10 12:09:30.191216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.191568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.191575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-06-10 12:09:30.191910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.192291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.192298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-06-10 12:09:30.192671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.192897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.192904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-06-10 12:09:30.193239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.193445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.193453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-06-10 12:09:30.193757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.194121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.194128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-06-10 12:09:30.194508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.194863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.194869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-06-10 12:09:30.195213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.195497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.195504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 
00:31:36.525 [2024-06-10 12:09:30.195842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.196200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.196207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-06-10 12:09:30.196474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.196804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.196810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-06-10 12:09:30.197118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.197437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.197443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-06-10 12:09:30.197833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.198220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.198227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-06-10 12:09:30.198632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.198974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.198981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-06-10 12:09:30.199345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.199612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.199618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-06-10 12:09:30.199992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.200352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-06-10 12:09:30.200358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 
00:31:36.525 [2024-06-10 12:09:30.200697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:31:36.525 [2024-06-10 12:09:30.200870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:31:36.525 [2024-06-10 12:09:30.200877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 
00:31:36.525 qpair failed and we were unable to recover it. 
00:31:36.525-00:31:36.802 [... the same retry pattern repeats continuously from 12:09:30.200697 through 12:09:30.306600: repeated posix_sock_create "connect() failed, errno = 111" errors, each group followed by an nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420" and the line "qpair failed and we were unable to recover it." ...]
00:31:36.802 [2024-06-10 12:09:30.306828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.307192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.307199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.802 qpair failed and we were unable to recover it. 00:31:36.802 [2024-06-10 12:09:30.307547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.307891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.307899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.802 qpair failed and we were unable to recover it. 00:31:36.802 [2024-06-10 12:09:30.308282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.308644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.308650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.802 qpair failed and we were unable to recover it. 00:31:36.802 [2024-06-10 12:09:30.308992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.309369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.309376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.802 qpair failed and we were unable to recover it. 00:31:36.802 [2024-06-10 12:09:30.309783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.310199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.310206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.802 qpair failed and we were unable to recover it. 00:31:36.802 [2024-06-10 12:09:30.310564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.310936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.310942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.802 qpair failed and we were unable to recover it. 00:31:36.802 [2024-06-10 12:09:30.311181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.311522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.311530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.802 qpair failed and we were unable to recover it. 
00:31:36.802 [2024-06-10 12:09:30.311870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.312217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.312223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.802 qpair failed and we were unable to recover it. 00:31:36.802 [2024-06-10 12:09:30.312586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.312963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.312970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.802 qpair failed and we were unable to recover it. 00:31:36.802 [2024-06-10 12:09:30.313336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.313717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.313724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.802 qpair failed and we were unable to recover it. 00:31:36.802 [2024-06-10 12:09:30.314103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.314382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.314389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.802 qpair failed and we were unable to recover it. 00:31:36.802 [2024-06-10 12:09:30.314762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.315120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.315126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.802 qpair failed and we were unable to recover it. 00:31:36.802 [2024-06-10 12:09:30.315465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.315821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.315827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.802 qpair failed and we were unable to recover it. 00:31:36.802 [2024-06-10 12:09:30.316189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.316551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.316558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.802 qpair failed and we were unable to recover it. 
00:31:36.802 [2024-06-10 12:09:30.316896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.317228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.317247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.802 qpair failed and we were unable to recover it. 00:31:36.802 [2024-06-10 12:09:30.317530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.317891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.317898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.802 qpair failed and we were unable to recover it. 00:31:36.802 [2024-06-10 12:09:30.318275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.318634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.802 [2024-06-10 12:09:30.318640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.802 qpair failed and we were unable to recover it. 00:31:36.803 [2024-06-10 12:09:30.319001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.319396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.319402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.803 qpair failed and we were unable to recover it. 00:31:36.803 [2024-06-10 12:09:30.319777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.320146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.320153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.803 qpair failed and we were unable to recover it. 00:31:36.803 [2024-06-10 12:09:30.320359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.320809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.320815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.803 qpair failed and we were unable to recover it. 00:31:36.803 [2024-06-10 12:09:30.321156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.321399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.321406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.803 qpair failed and we were unable to recover it. 
00:31:36.803 [2024-06-10 12:09:30.321880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.322219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.322225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.803 qpair failed and we were unable to recover it. 00:31:36.803 [2024-06-10 12:09:30.322580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.322842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.322848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.803 qpair failed and we were unable to recover it. 00:31:36.803 [2024-06-10 12:09:30.323193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.323547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.323553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.803 qpair failed and we were unable to recover it. 00:31:36.803 [2024-06-10 12:09:30.323766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.324049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.324055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.803 qpair failed and we were unable to recover it. 00:31:36.803 [2024-06-10 12:09:30.324388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.324771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.324778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.803 qpair failed and we were unable to recover it. 00:31:36.803 [2024-06-10 12:09:30.325042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.325415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.325423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.803 qpair failed and we were unable to recover it. 00:31:36.803 [2024-06-10 12:09:30.325788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.326148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.326154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.803 qpair failed and we were unable to recover it. 
00:31:36.803 [2024-06-10 12:09:30.326503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.326735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.326741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.803 qpair failed and we were unable to recover it. 00:31:36.803 [2024-06-10 12:09:30.327156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.327513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.327520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.803 qpair failed and we were unable to recover it. 00:31:36.803 [2024-06-10 12:09:30.327872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.328102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.328108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.803 qpair failed and we were unable to recover it. 00:31:36.803 [2024-06-10 12:09:30.328495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.328886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.328893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.803 qpair failed and we were unable to recover it. 00:31:36.803 [2024-06-10 12:09:30.329241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.329591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.329598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.803 qpair failed and we were unable to recover it. 00:31:36.803 [2024-06-10 12:09:30.329957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.330300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.330307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.803 qpair failed and we were unable to recover it. 00:31:36.803 [2024-06-10 12:09:30.330653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.331009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.331016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.803 qpair failed and we were unable to recover it. 
00:31:36.803 [2024-06-10 12:09:30.331409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.331784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.331792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.803 qpair failed and we were unable to recover it. 00:31:36.803 [2024-06-10 12:09:30.332174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.332420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.332426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.803 qpair failed and we were unable to recover it. 00:31:36.803 [2024-06-10 12:09:30.332821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.333013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.333021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.803 qpair failed and we were unable to recover it. 00:31:36.803 [2024-06-10 12:09:30.333418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.333794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.333801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.803 qpair failed and we were unable to recover it. 00:31:36.803 [2024-06-10 12:09:30.334180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.334517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.334524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.803 qpair failed and we were unable to recover it. 00:31:36.803 [2024-06-10 12:09:30.334891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.335124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.335130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.803 qpair failed and we were unable to recover it. 00:31:36.803 [2024-06-10 12:09:30.335508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.335864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.335871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.803 qpair failed and we were unable to recover it. 
00:31:36.803 [2024-06-10 12:09:30.336257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.336632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.336639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.803 qpair failed and we were unable to recover it. 00:31:36.803 [2024-06-10 12:09:30.337018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.337345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.337352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.803 qpair failed and we were unable to recover it. 00:31:36.803 [2024-06-10 12:09:30.337746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.338090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.338097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.803 qpair failed and we were unable to recover it. 00:31:36.803 [2024-06-10 12:09:30.338463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.803 [2024-06-10 12:09:30.338767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.338774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 00:31:36.804 [2024-06-10 12:09:30.339143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.339324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.339331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 00:31:36.804 [2024-06-10 12:09:30.339673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.340034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.340040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 00:31:36.804 [2024-06-10 12:09:30.340450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.340837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.340843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 
00:31:36.804 [2024-06-10 12:09:30.341203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.341572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.341579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 00:31:36.804 [2024-06-10 12:09:30.342010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.342329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.342336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 00:31:36.804 [2024-06-10 12:09:30.342699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.343026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.343033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 00:31:36.804 [2024-06-10 12:09:30.343396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.343621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.343628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 00:31:36.804 [2024-06-10 12:09:30.343986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.344355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.344361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 00:31:36.804 [2024-06-10 12:09:30.344732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.345052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.345058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 00:31:36.804 [2024-06-10 12:09:30.345461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.345837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.345843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 
00:31:36.804 [2024-06-10 12:09:30.346232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.346674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.346680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 00:31:36.804 [2024-06-10 12:09:30.347039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.347388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.347395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 00:31:36.804 [2024-06-10 12:09:30.347739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.348119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.348125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 00:31:36.804 [2024-06-10 12:09:30.348475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.348841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.348847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 00:31:36.804 [2024-06-10 12:09:30.349005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.349245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.349252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 00:31:36.804 [2024-06-10 12:09:30.349682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.350038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.350044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 00:31:36.804 [2024-06-10 12:09:30.350590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.350856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.350866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 
00:31:36.804 [2024-06-10 12:09:30.351239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.351596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.351603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 00:31:36.804 [2024-06-10 12:09:30.351850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.352208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.352214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 00:31:36.804 [2024-06-10 12:09:30.352586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.352844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.352850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 00:31:36.804 [2024-06-10 12:09:30.353241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.353674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.353681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 00:31:36.804 [2024-06-10 12:09:30.353910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.354272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.354279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 00:31:36.804 [2024-06-10 12:09:30.354646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.355039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.355046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 00:31:36.804 [2024-06-10 12:09:30.355407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.355797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.355803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 
00:31:36.804 [2024-06-10 12:09:30.356173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.356532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.356539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 00:31:36.804 [2024-06-10 12:09:30.356908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.357156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.357163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 00:31:36.804 [2024-06-10 12:09:30.357518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.357912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.357918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 00:31:36.804 [2024-06-10 12:09:30.358289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.358664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.804 [2024-06-10 12:09:30.358670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.804 qpair failed and we were unable to recover it. 00:31:36.804 [2024-06-10 12:09:30.359027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.359402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.359408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.805 qpair failed and we were unable to recover it. 00:31:36.805 [2024-06-10 12:09:30.359870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.360248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.360256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.805 qpair failed and we were unable to recover it. 00:31:36.805 [2024-06-10 12:09:30.360469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.360756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.360763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.805 qpair failed and we were unable to recover it. 
00:31:36.805 [2024-06-10 12:09:30.361122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.361481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.361488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.805 qpair failed and we were unable to recover it. 00:31:36.805 [2024-06-10 12:09:30.361832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.362185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.362191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.805 qpair failed and we were unable to recover it. 00:31:36.805 [2024-06-10 12:09:30.362577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.362943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.362949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.805 qpair failed and we were unable to recover it. 00:31:36.805 [2024-06-10 12:09:30.363301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.363646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.363653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.805 qpair failed and we were unable to recover it. 00:31:36.805 [2024-06-10 12:09:30.364022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.364366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.364373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.805 qpair failed and we were unable to recover it. 00:31:36.805 [2024-06-10 12:09:30.364596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.364952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.364958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.805 qpair failed and we were unable to recover it. 00:31:36.805 [2024-06-10 12:09:30.365211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.365597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.365603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.805 qpair failed and we were unable to recover it. 
00:31:36.805 [2024-06-10 12:09:30.365887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.366236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.366247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.805 qpair failed and we were unable to recover it. 00:31:36.805 [2024-06-10 12:09:30.366670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.367086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.367092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.805 qpair failed and we were unable to recover it. 00:31:36.805 [2024-06-10 12:09:30.367253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.367525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.367532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.805 qpair failed and we were unable to recover it. 00:31:36.805 [2024-06-10 12:09:30.367898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.368260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.368267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.805 qpair failed and we were unable to recover it. 00:31:36.805 [2024-06-10 12:09:30.368652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.369046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.369054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.805 qpair failed and we were unable to recover it. 00:31:36.805 [2024-06-10 12:09:30.369353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.369746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.369752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.805 qpair failed and we were unable to recover it. 00:31:36.805 [2024-06-10 12:09:30.370107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.370466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.370472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.805 qpair failed and we were unable to recover it. 
00:31:36.805 [2024-06-10 12:09:30.370823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.371229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.371235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.805 qpair failed and we were unable to recover it. 00:31:36.805 [2024-06-10 12:09:30.371516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.371905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.371912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.805 qpair failed and we were unable to recover it. 00:31:36.805 [2024-06-10 12:09:30.372308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.372685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.372692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.805 qpair failed and we were unable to recover it. 00:31:36.805 [2024-06-10 12:09:30.373062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.373453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.373460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.805 qpair failed and we were unable to recover it. 00:31:36.805 [2024-06-10 12:09:30.373887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.374210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.374217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.805 qpair failed and we were unable to recover it. 00:31:36.805 [2024-06-10 12:09:30.374578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.374944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.374951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.805 qpair failed and we were unable to recover it. 00:31:36.805 [2024-06-10 12:09:30.375206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.805 [2024-06-10 12:09:30.375563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.806 [2024-06-10 12:09:30.375570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.806 qpair failed and we were unable to recover it. 
00:31:36.806 [2024-06-10 12:09:30.375917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.806 [2024-06-10 12:09:30.376227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.806 [2024-06-10 12:09:30.376236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420
00:31:36.806 qpair failed and we were unable to recover it.
[The same retry sequence -- two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." -- repeats continuously from 12:09:30.375 through 12:09:30.477.]
00:31:36.811 [2024-06-10 12:09:30.477585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.811 [2024-06-10 12:09:30.477823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.811 [2024-06-10 12:09:30.477829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420
00:31:36.811 qpair failed and we were unable to recover it.
00:31:36.811 [2024-06-10 12:09:30.478209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.811 [2024-06-10 12:09:30.478571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.811 [2024-06-10 12:09:30.478578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.811 qpair failed and we were unable to recover it. 00:31:36.811 [2024-06-10 12:09:30.478913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.811 [2024-06-10 12:09:30.479298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.811 [2024-06-10 12:09:30.479305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.811 qpair failed and we were unable to recover it. 00:31:36.811 [2024-06-10 12:09:30.479643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.811 [2024-06-10 12:09:30.480015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.811 [2024-06-10 12:09:30.480022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.811 qpair failed and we were unable to recover it. 00:31:36.811 [2024-06-10 12:09:30.480236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.811 [2024-06-10 12:09:30.480472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.811 [2024-06-10 12:09:30.480480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.811 qpair failed and we were unable to recover it. 00:31:36.811 [2024-06-10 12:09:30.480767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.811 [2024-06-10 12:09:30.481132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.811 [2024-06-10 12:09:30.481139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.811 qpair failed and we were unable to recover it. 00:31:36.811 [2024-06-10 12:09:30.481507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.811 [2024-06-10 12:09:30.481860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.811 [2024-06-10 12:09:30.481867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.811 qpair failed and we were unable to recover it. 00:31:36.811 [2024-06-10 12:09:30.482231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.811 [2024-06-10 12:09:30.482616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.811 [2024-06-10 12:09:30.482623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.811 qpair failed and we were unable to recover it. 
00:31:36.811 [2024-06-10 12:09:30.482970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.811 [2024-06-10 12:09:30.483301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.811 [2024-06-10 12:09:30.483309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.811 qpair failed and we were unable to recover it. 00:31:36.811 [2024-06-10 12:09:30.483655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.811 [2024-06-10 12:09:30.484003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.811 [2024-06-10 12:09:30.484010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.811 qpair failed and we were unable to recover it. 00:31:36.811 [2024-06-10 12:09:30.484371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.811 [2024-06-10 12:09:30.484616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.811 [2024-06-10 12:09:30.484623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.811 qpair failed and we were unable to recover it. 00:31:36.811 [2024-06-10 12:09:30.484992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.811 [2024-06-10 12:09:30.485344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.811 [2024-06-10 12:09:30.485350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.811 qpair failed and we were unable to recover it. 00:31:36.811 [2024-06-10 12:09:30.485565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.811 [2024-06-10 12:09:30.485932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.811 [2024-06-10 12:09:30.485939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.811 qpair failed and we were unable to recover it. 00:31:36.811 [2024-06-10 12:09:30.486293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.811 [2024-06-10 12:09:30.486641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.811 [2024-06-10 12:09:30.486647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.811 qpair failed and we were unable to recover it. 00:31:36.811 [2024-06-10 12:09:30.486984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.811 [2024-06-10 12:09:30.487314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.811 [2024-06-10 12:09:30.487321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 
00:31:36.812 [2024-06-10 12:09:30.487700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.487954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.487961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 00:31:36.812 [2024-06-10 12:09:30.488205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.488593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.488601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 00:31:36.812 [2024-06-10 12:09:30.488987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.489335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.489341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 00:31:36.812 [2024-06-10 12:09:30.489712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.490100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.490108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 00:31:36.812 [2024-06-10 12:09:30.490459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.490836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.490842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 00:31:36.812 [2024-06-10 12:09:30.491097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.491342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.491349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 00:31:36.812 [2024-06-10 12:09:30.491603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.491961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.491968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 
00:31:36.812 [2024-06-10 12:09:30.492337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.492677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.492683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 00:31:36.812 [2024-06-10 12:09:30.493024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.493268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.493275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 00:31:36.812 [2024-06-10 12:09:30.493551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.493901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.493908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 00:31:36.812 [2024-06-10 12:09:30.494249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.494515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.494521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 00:31:36.812 [2024-06-10 12:09:30.494705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.495018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.495025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 00:31:36.812 [2024-06-10 12:09:30.495461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.495671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.495677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 00:31:36.812 [2024-06-10 12:09:30.496050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.496273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.496282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 
00:31:36.812 [2024-06-10 12:09:30.496668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.497017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.497024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 00:31:36.812 [2024-06-10 12:09:30.497411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.497769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.497775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 00:31:36.812 [2024-06-10 12:09:30.498175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.498518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.498524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 00:31:36.812 [2024-06-10 12:09:30.498861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.499139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.499146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 00:31:36.812 [2024-06-10 12:09:30.499530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.499846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.499853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 00:31:36.812 [2024-06-10 12:09:30.499996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.500371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.500378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 00:31:36.812 [2024-06-10 12:09:30.500740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.500945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.500953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 
00:31:36.812 [2024-06-10 12:09:30.501299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.501509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.501517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 00:31:36.812 [2024-06-10 12:09:30.501878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.502215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.502222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 00:31:36.812 [2024-06-10 12:09:30.502585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.502955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.502962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 00:31:36.812 [2024-06-10 12:09:30.503340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.503619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.503625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 00:31:36.812 [2024-06-10 12:09:30.504006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.504253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.504260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 00:31:36.812 [2024-06-10 12:09:30.504605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.504947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.504954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 00:31:36.812 [2024-06-10 12:09:30.505308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.505649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.812 [2024-06-10 12:09:30.505656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.812 qpair failed and we were unable to recover it. 
00:31:36.812 [2024-06-10 12:09:30.506033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.506253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.506260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 00:31:36.813 [2024-06-10 12:09:30.506611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.506998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.507005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 00:31:36.813 [2024-06-10 12:09:30.507370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.507722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.507728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 00:31:36.813 [2024-06-10 12:09:30.508029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.508329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.508336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 00:31:36.813 [2024-06-10 12:09:30.508655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.508955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.508962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 00:31:36.813 [2024-06-10 12:09:30.509329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.509678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.509686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 00:31:36.813 [2024-06-10 12:09:30.510033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.510273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.510280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 
00:31:36.813 [2024-06-10 12:09:30.510645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.511017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.511024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 00:31:36.813 [2024-06-10 12:09:30.511387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.511736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.511742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 00:31:36.813 [2024-06-10 12:09:30.512041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.512368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.512375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 00:31:36.813 [2024-06-10 12:09:30.512720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.513066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.513073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 00:31:36.813 [2024-06-10 12:09:30.513445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.513822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.513828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 00:31:36.813 [2024-06-10 12:09:30.514206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.514483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.514490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 00:31:36.813 [2024-06-10 12:09:30.514868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.515228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.515235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 
00:31:36.813 [2024-06-10 12:09:30.515464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.515784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.515791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 00:31:36.813 [2024-06-10 12:09:30.516154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.516514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.516520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 00:31:36.813 [2024-06-10 12:09:30.516883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.517125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.517131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 00:31:36.813 [2024-06-10 12:09:30.517345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.517640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.517646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 00:31:36.813 [2024-06-10 12:09:30.517999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.518345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.518352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 00:31:36.813 [2024-06-10 12:09:30.518597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.518909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.518915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 00:31:36.813 [2024-06-10 12:09:30.519236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.519612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.519619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 
00:31:36.813 [2024-06-10 12:09:30.519968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.520139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.520145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 00:31:36.813 [2024-06-10 12:09:30.520636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.520980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.520986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 00:31:36.813 [2024-06-10 12:09:30.521336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.521684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.521691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 00:31:36.813 [2024-06-10 12:09:30.522044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.522477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.522484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 00:31:36.813 [2024-06-10 12:09:30.522827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.523218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.523224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 00:31:36.813 [2024-06-10 12:09:30.523605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.523984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.523991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 00:31:36.813 [2024-06-10 12:09:30.524349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.524739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.524745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 
00:31:36.813 [2024-06-10 12:09:30.525092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.525324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.813 [2024-06-10 12:09:30.525331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.813 qpair failed and we were unable to recover it. 00:31:36.814 [2024-06-10 12:09:30.525625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.526036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.526042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-06-10 12:09:30.526417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.526749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.526756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-06-10 12:09:30.526992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.527346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.527352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-06-10 12:09:30.527589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.527900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.527906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-06-10 12:09:30.528126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.528495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.528502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-06-10 12:09:30.528847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.529161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.529167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 
00:31:36.814 [2024-06-10 12:09:30.529594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.529861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.529867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-06-10 12:09:30.530213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.530476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.530482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-06-10 12:09:30.530889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.531165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.531172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-06-10 12:09:30.531498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.531876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.531883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-06-10 12:09:30.532195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.532571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.532579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-06-10 12:09:30.532920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.533342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.533349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-06-10 12:09:30.533734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.534088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.534095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 
00:31:36.814 [2024-06-10 12:09:30.534350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.534717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.534724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-06-10 12:09:30.534987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.535379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.535385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-06-10 12:09:30.535746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.536006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.536012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-06-10 12:09:30.536373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.536810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.536816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-06-10 12:09:30.537157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.537545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.537552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-06-10 12:09:30.537819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.538157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.538164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-06-10 12:09:30.538547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.538938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.538944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 
00:31:36.814 [2024-06-10 12:09:30.539309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.539688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.539695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-06-10 12:09:30.540079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.540393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.540400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-06-10 12:09:30.540776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.541042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.541048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-06-10 12:09:30.541417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.541770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.541777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-06-10 12:09:30.542137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.542515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-06-10 12:09:30.542521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.815 [2024-06-10 12:09:30.542866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.543217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.543223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-06-10 12:09:30.543509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.543769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.543776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 
00:31:36.815 [2024-06-10 12:09:30.544000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.544358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.544365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-06-10 12:09:30.544631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.544888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.544894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-06-10 12:09:30.545202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.545556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.545562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-06-10 12:09:30.545901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.546254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.546261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-06-10 12:09:30.546466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.546822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.546828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-06-10 12:09:30.547126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.547441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.547448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-06-10 12:09:30.547817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.548193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.548200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 
00:31:36.815 [2024-06-10 12:09:30.548445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.548834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.548841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-06-10 12:09:30.549046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.549280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.549287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-06-10 12:09:30.549724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.550082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.550088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-06-10 12:09:30.550487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.550891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.550897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-06-10 12:09:30.551235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.551611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.551617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-06-10 12:09:30.551961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.552158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.552164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-06-10 12:09:30.552534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.552901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.552907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 
00:31:36.815 [2024-06-10 12:09:30.553235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.553493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.553500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-06-10 12:09:30.553839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.554201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.554207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-06-10 12:09:30.554562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.554811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.554818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-06-10 12:09:30.555173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.555520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.555526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-06-10 12:09:30.555941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.556314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.556321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-06-10 12:09:30.556660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.557042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.557048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-06-10 12:09:30.557413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.557610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.557617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 
00:31:36.815 [2024-06-10 12:09:30.558024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.558464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.558471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-06-10 12:09:30.558808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.559173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.559179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-06-10 12:09:30.559553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.559934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.559940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-06-10 12:09:30.560324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.560606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.560614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-06-10 12:09:30.561019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.561367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.561373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-06-10 12:09:30.561740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.562106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-06-10 12:09:30.562113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.816 [2024-06-10 12:09:30.562453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-06-10 12:09:30.562804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-06-10 12:09:30.562811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 
00:31:37.092 [2024-06-10 12:09:30.563212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.563548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.563555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.092 qpair failed and we were unable to recover it. 00:31:37.092 [2024-06-10 12:09:30.563969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.564209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.564217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.092 qpair failed and we were unable to recover it. 00:31:37.092 [2024-06-10 12:09:30.564617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.565002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.565010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.092 qpair failed and we were unable to recover it. 00:31:37.092 [2024-06-10 12:09:30.565364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.565754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.565761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.092 qpair failed and we were unable to recover it. 00:31:37.092 [2024-06-10 12:09:30.566111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.566357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.566364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.092 qpair failed and we were unable to recover it. 00:31:37.092 [2024-06-10 12:09:30.566720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.566977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.566984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.092 qpair failed and we were unable to recover it. 00:31:37.092 [2024-06-10 12:09:30.567368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.567798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.567804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.092 qpair failed and we were unable to recover it. 
00:31:37.092 [2024-06-10 12:09:30.568168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.568520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.568526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.092 qpair failed and we were unable to recover it. 00:31:37.092 [2024-06-10 12:09:30.568854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.569106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.569112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.092 qpair failed and we were unable to recover it. 00:31:37.092 [2024-06-10 12:09:30.569462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.569847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.569853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.092 qpair failed and we were unable to recover it. 00:31:37.092 [2024-06-10 12:09:30.570231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.570607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.570613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.092 qpair failed and we were unable to recover it. 00:31:37.092 [2024-06-10 12:09:30.570897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.571234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.571241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.092 qpair failed and we were unable to recover it. 00:31:37.092 [2024-06-10 12:09:30.571630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.571980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.571986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.092 qpair failed and we were unable to recover it. 00:31:37.092 [2024-06-10 12:09:30.572345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.572581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.572588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.092 qpair failed and we were unable to recover it. 
00:31:37.092 [2024-06-10 12:09:30.572903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.092 [2024-06-10 12:09:30.573267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.573273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 00:31:37.093 [2024-06-10 12:09:30.573632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.573990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.573996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 00:31:37.093 [2024-06-10 12:09:30.574364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.574745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.574751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 00:31:37.093 [2024-06-10 12:09:30.575004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.575344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.575355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 00:31:37.093 [2024-06-10 12:09:30.575848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.576151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.576157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 00:31:37.093 [2024-06-10 12:09:30.576357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.576695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.576701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 00:31:37.093 [2024-06-10 12:09:30.576946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.577317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.577323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 
00:31:37.093 [2024-06-10 12:09:30.577672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.578033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.578040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 00:31:37.093 [2024-06-10 12:09:30.578371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.578725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.578732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 00:31:37.093 [2024-06-10 12:09:30.579089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.579330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.579336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 00:31:37.093 [2024-06-10 12:09:30.579727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.580077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.580083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 00:31:37.093 [2024-06-10 12:09:30.580339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.580694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.580701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 00:31:37.093 [2024-06-10 12:09:30.581145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.581514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.581521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 00:31:37.093 [2024-06-10 12:09:30.581892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.582267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.582273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 
00:31:37.093 [2024-06-10 12:09:30.582619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.582865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.582870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 00:31:37.093 [2024-06-10 12:09:30.583219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.583578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.583585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 00:31:37.093 [2024-06-10 12:09:30.583907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.584251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.584258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 00:31:37.093 [2024-06-10 12:09:30.584620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.584856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.584862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 00:31:37.093 [2024-06-10 12:09:30.585175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.585412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.585420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 00:31:37.093 [2024-06-10 12:09:30.585802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.586112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.586117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 00:31:37.093 [2024-06-10 12:09:30.586478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.586815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.586822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 
00:31:37.093 [2024-06-10 12:09:30.587186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.587527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.587534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 00:31:37.093 [2024-06-10 12:09:30.587915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.588253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.588259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 00:31:37.093 [2024-06-10 12:09:30.588667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.589047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.589054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 00:31:37.093 [2024-06-10 12:09:30.589367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.589622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.589628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 00:31:37.093 [2024-06-10 12:09:30.589961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.590287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.590293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 00:31:37.093 [2024-06-10 12:09:30.590640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.590972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.590978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 00:31:37.093 [2024-06-10 12:09:30.591245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.591403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.591410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 
00:31:37.093 [2024-06-10 12:09:30.591875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.592296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.093 [2024-06-10 12:09:30.592304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.093 qpair failed and we were unable to recover it. 00:31:37.093 [2024-06-10 12:09:30.592688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.592914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.592920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 00:31:37.094 [2024-06-10 12:09:30.593253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.593695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.593701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 00:31:37.094 [2024-06-10 12:09:30.594075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.594403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.594409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 00:31:37.094 [2024-06-10 12:09:30.594748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.595112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.595118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 00:31:37.094 [2024-06-10 12:09:30.595470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.595871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.595877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 00:31:37.094 [2024-06-10 12:09:30.596155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.596572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.596578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 
00:31:37.094 [2024-06-10 12:09:30.596920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.597280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.597286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 00:31:37.094 [2024-06-10 12:09:30.597588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.597821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.597827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 00:31:37.094 [2024-06-10 12:09:30.598178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.598426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.598433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 00:31:37.094 [2024-06-10 12:09:30.598760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.599018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.599026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 00:31:37.094 [2024-06-10 12:09:30.599438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.599791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.599798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 00:31:37.094 [2024-06-10 12:09:30.600163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.600305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.600312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 00:31:37.094 [2024-06-10 12:09:30.600566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.600934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.600941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 
00:31:37.094 [2024-06-10 12:09:30.601123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.601364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.601371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 00:31:37.094 [2024-06-10 12:09:30.601724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.602106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.602114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 00:31:37.094 [2024-06-10 12:09:30.602414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.602750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.602757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 00:31:37.094 [2024-06-10 12:09:30.603103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.603383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.603390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 00:31:37.094 [2024-06-10 12:09:30.603815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.604182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.604188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 00:31:37.094 [2024-06-10 12:09:30.604356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.604745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.604752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 00:31:37.094 [2024-06-10 12:09:30.605166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.605446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.605454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 
00:31:37.094 [2024-06-10 12:09:30.605817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.606160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.606167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 00:31:37.094 [2024-06-10 12:09:30.606456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.606839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.606846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 00:31:37.094 [2024-06-10 12:09:30.607294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.607537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.607544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 00:31:37.094 [2024-06-10 12:09:30.607892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.608237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.608248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 00:31:37.094 [2024-06-10 12:09:30.608502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.608877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.608885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 00:31:37.094 [2024-06-10 12:09:30.609250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.609487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.609494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 00:31:37.094 [2024-06-10 12:09:30.609923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.610274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.610282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 
00:31:37.094 [2024-06-10 12:09:30.610671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.611030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.611037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 00:31:37.094 [2024-06-10 12:09:30.611393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.611773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.094 [2024-06-10 12:09:30.611780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.094 qpair failed and we were unable to recover it. 00:31:37.095 [2024-06-10 12:09:30.612057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.612457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.612464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 00:31:37.095 [2024-06-10 12:09:30.612886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.613090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.613097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 00:31:37.095 [2024-06-10 12:09:30.613363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.613687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.613695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 00:31:37.095 [2024-06-10 12:09:30.614097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.614410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.614416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 00:31:37.095 [2024-06-10 12:09:30.614803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.615125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.615131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 
00:31:37.095 [2024-06-10 12:09:30.615389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.615755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.615763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 00:31:37.095 [2024-06-10 12:09:30.616162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.616375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.616383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 00:31:37.095 [2024-06-10 12:09:30.616636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.617021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.617028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 00:31:37.095 [2024-06-10 12:09:30.617379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.617626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.617633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 00:31:37.095 [2024-06-10 12:09:30.617974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.618356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.618363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 00:31:37.095 [2024-06-10 12:09:30.618734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.619132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.619139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 00:31:37.095 [2024-06-10 12:09:30.619503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.619865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.619872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 
00:31:37.095 [2024-06-10 12:09:30.620197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.620585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.620592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 00:31:37.095 [2024-06-10 12:09:30.620949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.621319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.621327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 00:31:37.095 [2024-06-10 12:09:30.621731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.621987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.621994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 00:31:37.095 [2024-06-10 12:09:30.622209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.622663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.622670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 00:31:37.095 [2024-06-10 12:09:30.622908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.623223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.623229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 00:31:37.095 [2024-06-10 12:09:30.623612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.624024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.624031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 00:31:37.095 [2024-06-10 12:09:30.624378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.624723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.624730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 
00:31:37.095 [2024-06-10 12:09:30.625075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.625449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.625456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 00:31:37.095 [2024-06-10 12:09:30.625824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.626163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.626169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 00:31:37.095 [2024-06-10 12:09:30.626616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.626810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.626817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 00:31:37.095 [2024-06-10 12:09:30.627131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.627377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.627384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 00:31:37.095 [2024-06-10 12:09:30.627766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.628137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.628143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 00:31:37.095 [2024-06-10 12:09:30.628592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.628994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.629000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 00:31:37.095 [2024-06-10 12:09:30.629340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.629696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.629705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 
00:31:37.095 [2024-06-10 12:09:30.629986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.630357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.630365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 00:31:37.095 [2024-06-10 12:09:30.630698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.630949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.630957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 00:31:37.095 [2024-06-10 12:09:30.631307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.631635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.095 [2024-06-10 12:09:30.631644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.095 qpair failed and we were unable to recover it. 00:31:37.096 [2024-06-10 12:09:30.632022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.632248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.632256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.096 qpair failed and we were unable to recover it. 00:31:37.096 [2024-06-10 12:09:30.632452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.632600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.632608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.096 qpair failed and we were unable to recover it. 00:31:37.096 [2024-06-10 12:09:30.632978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.633388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.633396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.096 qpair failed and we were unable to recover it. 00:31:37.096 [2024-06-10 12:09:30.633517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.633950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.633959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.096 qpair failed and we were unable to recover it. 
00:31:37.096 [2024-06-10 12:09:30.634328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.634722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.634730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.096 qpair failed and we were unable to recover it. 00:31:37.096 [2024-06-10 12:09:30.634981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.635215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.635224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.096 qpair failed and we were unable to recover it. 00:31:37.096 [2024-06-10 12:09:30.635574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.635961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.635970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.096 qpair failed and we were unable to recover it. 00:31:37.096 [2024-06-10 12:09:30.636306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.636597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.636606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.096 qpair failed and we were unable to recover it. 00:31:37.096 [2024-06-10 12:09:30.636970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.637203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.637212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.096 qpair failed and we were unable to recover it. 00:31:37.096 [2024-06-10 12:09:30.637499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.637885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.637893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.096 qpair failed and we were unable to recover it. 00:31:37.096 [2024-06-10 12:09:30.638277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.638548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.638557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.096 qpair failed and we were unable to recover it. 
00:31:37.096 [2024-06-10 12:09:30.638921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.639140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.639149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.096 qpair failed and we were unable to recover it. 00:31:37.096 [2024-06-10 12:09:30.639295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.639561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.639569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.096 qpair failed and we were unable to recover it. 00:31:37.096 [2024-06-10 12:09:30.639909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.640091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.640100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.096 qpair failed and we were unable to recover it. 00:31:37.096 [2024-06-10 12:09:30.640320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.640816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.640825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.096 qpair failed and we were unable to recover it. 00:31:37.096 [2024-06-10 12:09:30.641198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.641439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.641447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.096 qpair failed and we were unable to recover it. 00:31:37.096 [2024-06-10 12:09:30.641724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.641982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.641990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.096 qpair failed and we were unable to recover it. 00:31:37.096 [2024-06-10 12:09:30.642379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.642614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.642623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.096 qpair failed and we were unable to recover it. 
00:31:37.096 [2024-06-10 12:09:30.642960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.643320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.643328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.096 qpair failed and we were unable to recover it. 00:31:37.096 [2024-06-10 12:09:30.643678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.643997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.644006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.096 qpair failed and we were unable to recover it. 00:31:37.096 [2024-06-10 12:09:30.644351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.644750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.644759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.096 qpair failed and we were unable to recover it. 00:31:37.096 [2024-06-10 12:09:30.645092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.645437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.645447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.096 qpair failed and we were unable to recover it. 00:31:37.096 [2024-06-10 12:09:30.645702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.646073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.646081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.096 qpair failed and we were unable to recover it. 00:31:37.096 [2024-06-10 12:09:30.646284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.646623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.646631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.096 qpair failed and we were unable to recover it. 00:31:37.096 [2024-06-10 12:09:30.646989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.647383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.647391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.096 qpair failed and we were unable to recover it. 
00:31:37.096 [2024-06-10 12:09:30.647631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.647961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.647969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.096 qpair failed and we were unable to recover it. 00:31:37.096 [2024-06-10 12:09:30.648122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.096 [2024-06-10 12:09:30.648482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.648490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.097 qpair failed and we were unable to recover it. 00:31:37.097 [2024-06-10 12:09:30.648681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.649035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.649043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.097 qpair failed and we were unable to recover it. 00:31:37.097 [2024-06-10 12:09:30.649402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.649754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.649762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.097 qpair failed and we were unable to recover it. 00:31:37.097 [2024-06-10 12:09:30.650130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.650472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.650480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.097 qpair failed and we were unable to recover it. 00:31:37.097 [2024-06-10 12:09:30.650832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.651187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.651196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.097 qpair failed and we were unable to recover it. 00:31:37.097 [2024-06-10 12:09:30.651558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.651928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.651937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.097 qpair failed and we were unable to recover it. 
00:31:37.097 [2024-06-10 12:09:30.652200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.652644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.652651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.097 qpair failed and we were unable to recover it. 00:31:37.097 [2024-06-10 12:09:30.653013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.653333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.653342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.097 qpair failed and we were unable to recover it. 00:31:37.097 [2024-06-10 12:09:30.653711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.654022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.654029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.097 qpair failed and we were unable to recover it. 00:31:37.097 [2024-06-10 12:09:30.654400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.654733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.654740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.097 qpair failed and we were unable to recover it. 00:31:37.097 [2024-06-10 12:09:30.655167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.655511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.655518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.097 qpair failed and we were unable to recover it. 00:31:37.097 [2024-06-10 12:09:30.655878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.656113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.656121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.097 qpair failed and we were unable to recover it. 00:31:37.097 [2024-06-10 12:09:30.656457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.656824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.656832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.097 qpair failed and we were unable to recover it. 
00:31:37.097 [2024-06-10 12:09:30.657104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.657337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.657344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.097 qpair failed and we were unable to recover it. 00:31:37.097 [2024-06-10 12:09:30.657651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.658004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.658011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.097 qpair failed and we were unable to recover it. 00:31:37.097 [2024-06-10 12:09:30.658144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.658547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.658555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.097 qpair failed and we were unable to recover it. 00:31:37.097 [2024-06-10 12:09:30.658777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.659135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.659142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.097 qpair failed and we were unable to recover it. 00:31:37.097 [2024-06-10 12:09:30.659410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.659646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.659653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.097 qpair failed and we were unable to recover it. 00:31:37.097 [2024-06-10 12:09:30.660046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.660459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.660468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.097 qpair failed and we were unable to recover it. 00:31:37.097 [2024-06-10 12:09:30.660840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.661204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.661211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.097 qpair failed and we were unable to recover it. 
00:31:37.097 [2024-06-10 12:09:30.661570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.661962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.661969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.097 qpair failed and we were unable to recover it. 00:31:37.097 [2024-06-10 12:09:30.662372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.662741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.662748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.097 qpair failed and we were unable to recover it. 00:31:37.097 [2024-06-10 12:09:30.663126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.663368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.663376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.097 qpair failed and we were unable to recover it. 00:31:37.097 [2024-06-10 12:09:30.663744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.664000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.664007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.097 qpair failed and we were unable to recover it. 00:31:37.097 [2024-06-10 12:09:30.664329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.664519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.664528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.097 qpair failed and we were unable to recover it. 00:31:37.097 [2024-06-10 12:09:30.664734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.097 [2024-06-10 12:09:30.665079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.665087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 00:31:37.098 [2024-06-10 12:09:30.665463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.665845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.665854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 
00:31:37.098 [2024-06-10 12:09:30.666180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.666567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.666574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 00:31:37.098 [2024-06-10 12:09:30.666840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.667175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.667182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 00:31:37.098 [2024-06-10 12:09:30.667546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.667912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.667919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 00:31:37.098 [2024-06-10 12:09:30.668143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.668518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.668526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 00:31:37.098 [2024-06-10 12:09:30.668661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.668979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.668986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 00:31:37.098 [2024-06-10 12:09:30.669348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.669618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.669625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 00:31:37.098 [2024-06-10 12:09:30.669994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.670358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.670366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 
00:31:37.098 [2024-06-10 12:09:30.670591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.670976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.670984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 00:31:37.098 [2024-06-10 12:09:30.671338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.671675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.671683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 00:31:37.098 [2024-06-10 12:09:30.672038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.672271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.672279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 00:31:37.098 [2024-06-10 12:09:30.672646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.672972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.672979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 00:31:37.098 [2024-06-10 12:09:30.673258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.673593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.673601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 00:31:37.098 [2024-06-10 12:09:30.673984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.674254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.674263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 00:31:37.098 [2024-06-10 12:09:30.674621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.675239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.675261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 
00:31:37.098 [2024-06-10 12:09:30.675501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.675896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.675904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 00:31:37.098 [2024-06-10 12:09:30.676272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.676515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.676522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 00:31:37.098 [2024-06-10 12:09:30.676705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.677037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.677045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 00:31:37.098 [2024-06-10 12:09:30.677203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.677336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.677344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 00:31:37.098 [2024-06-10 12:09:30.677714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.678116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.678124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 00:31:37.098 [2024-06-10 12:09:30.678467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.678814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.678822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 00:31:37.098 [2024-06-10 12:09:30.679203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.679519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.679528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 
00:31:37.098 [2024-06-10 12:09:30.679866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.680097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.680105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 00:31:37.098 [2024-06-10 12:09:30.680459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.680833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.680840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 00:31:37.098 [2024-06-10 12:09:30.681201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.681645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.681652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 00:31:37.098 [2024-06-10 12:09:30.681998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.682337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.682344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 00:31:37.098 [2024-06-10 12:09:30.682710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.682795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.682802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 00:31:37.098 [2024-06-10 12:09:30.683176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.683535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.098 [2024-06-10 12:09:30.683541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.098 qpair failed and we were unable to recover it. 00:31:37.098 [2024-06-10 12:09:30.683906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.684218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.684225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 
00:31:37.099 [2024-06-10 12:09:30.684647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.684985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.684991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-06-10 12:09:30.685269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.685599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.685607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-06-10 12:09:30.685923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.686263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.686270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-06-10 12:09:30.686666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.687007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.687015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-06-10 12:09:30.687399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.687761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.687768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-06-10 12:09:30.688107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.688484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.688491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-06-10 12:09:30.688851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.689208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.689214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 
00:31:37.099 [2024-06-10 12:09:30.689613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.690020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.690027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-06-10 12:09:30.690365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.690733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.690740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-06-10 12:09:30.691008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.691360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.691367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-06-10 12:09:30.691709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.692053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.692059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-06-10 12:09:30.692421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.692764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.692773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-06-10 12:09:30.693114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.693464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.693472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-06-10 12:09:30.693747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.693972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.693978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 
00:31:37.099 [2024-06-10 12:09:30.694150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.694509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.694516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-06-10 12:09:30.694857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.695093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.695099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-06-10 12:09:30.695466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.695588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.695595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-06-10 12:09:30.695953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.696284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.696296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-06-10 12:09:30.696725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.697068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.697074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-06-10 12:09:30.697415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.697762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.697769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-06-10 12:09:30.698136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.698563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.698569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 
00:31:37.099 [2024-06-10 12:09:30.698959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.699301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.699309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-06-10 12:09:30.699706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.700044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.700050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-06-10 12:09:30.700414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.700796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.700802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-06-10 12:09:30.701034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.701365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.701372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-06-10 12:09:30.701795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.702130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.702137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-06-10 12:09:30.702514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.702899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.702906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-06-10 12:09:30.703304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.703640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-06-10 12:09:30.703646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 
00:31:37.100 [2024-06-10 12:09:30.703982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.704345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.704351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-06-10 12:09:30.704690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.705069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.705076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-06-10 12:09:30.705441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.705813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.705819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-06-10 12:09:30.706161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.706491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.706501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-06-10 12:09:30.706886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.707265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.707271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-06-10 12:09:30.707481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.707837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.707843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-06-10 12:09:30.708216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.708539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.708546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 
00:31:37.100 [2024-06-10 12:09:30.708892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.709233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.709239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-06-10 12:09:30.709421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.709833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.709840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-06-10 12:09:30.710084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.710363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.710369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-06-10 12:09:30.710755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.711100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.711106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-06-10 12:09:30.711291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.711558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.711565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-06-10 12:09:30.711933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.712280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.712286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-06-10 12:09:30.712618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.712963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.712969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 
00:31:37.100 [2024-06-10 12:09:30.713364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.713725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.713731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-06-10 12:09:30.714087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.714451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.714457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-06-10 12:09:30.714799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.715146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.715153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-06-10 12:09:30.715486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.715861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.715867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-06-10 12:09:30.716205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.716526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.716533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-06-10 12:09:30.716888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.717237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.717248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-06-10 12:09:30.717466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.717811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.717817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 
00:31:37.100 [2024-06-10 12:09:30.718181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.718525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.718533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-06-10 12:09:30.718881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.719258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.719265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-06-10 12:09:30.719602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.720024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.720031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-06-10 12:09:30.720361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.720713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.720720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-06-10 12:09:30.721052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.721413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.721419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-06-10 12:09:30.721768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.722001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.722007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-06-10 12:09:30.722198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.722518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.722525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 
00:31:37.100 [2024-06-10 12:09:30.722858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.723236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-06-10 12:09:30.723247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.101 [2024-06-10 12:09:30.723496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.723835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.723841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-06-10 12:09:30.724186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.724519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.724525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-06-10 12:09:30.724796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.725143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.725150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-06-10 12:09:30.725537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.725910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.725917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-06-10 12:09:30.726269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.726587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.726593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-06-10 12:09:30.726986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.727329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.727336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 
00:31:37.101 [2024-06-10 12:09:30.727706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.728074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.728080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-06-10 12:09:30.728418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.728772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.728778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-06-10 12:09:30.729052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.729354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.729361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-06-10 12:09:30.729801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.730133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.730140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-06-10 12:09:30.730519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.730748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.730756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-06-10 12:09:30.731109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.731504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.731511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-06-10 12:09:30.731889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.732235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.732245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 
00:31:37.101 [2024-06-10 12:09:30.732588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.732952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.732959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-06-10 12:09:30.733314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.733551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.733559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-06-10 12:09:30.733937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.734187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.734194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-06-10 12:09:30.734492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.734797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.734803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-06-10 12:09:30.735157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.735488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.735495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-06-10 12:09:30.735839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.736199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.736205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-06-10 12:09:30.736633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.736959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-06-10 12:09:30.736965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 
00:31:37.102 [2024-06-10 12:09:30.737329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.737651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.737658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-06-10 12:09:30.737988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.738285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.738291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-06-10 12:09:30.738621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.738982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.738988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-06-10 12:09:30.739371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.739739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.739746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-06-10 12:09:30.740109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.740461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.740468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-06-10 12:09:30.740804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.741151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.741157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-06-10 12:09:30.741514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.741852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.741858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 
00:31:37.102 [2024-06-10 12:09:30.742057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.742297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.742305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-06-10 12:09:30.742543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.742898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.742905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-06-10 12:09:30.743280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.743611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.743617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-06-10 12:09:30.743973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.744326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.744333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-06-10 12:09:30.744670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.745039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.745045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-06-10 12:09:30.745369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.745730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.745736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-06-10 12:09:30.746070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.746402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.746410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 
00:31:37.102 [2024-06-10 12:09:30.746774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.747208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.747213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-06-10 12:09:30.747555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.747917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.747923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-06-10 12:09:30.748281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.748511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.748518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-06-10 12:09:30.748855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.749119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.749125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-06-10 12:09:30.749453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.749798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.749804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-06-10 12:09:30.750201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.750527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.750534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-06-10 12:09:30.750705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.751054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.751060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 
00:31:37.102 [2024-06-10 12:09:30.751474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.751707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.751714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-06-10 12:09:30.752076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.752326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.752333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-06-10 12:09:30.752706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.753102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.753108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-06-10 12:09:30.753314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.753675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.753681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-06-10 12:09:30.754028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.754267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.754274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-06-10 12:09:30.754524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.754814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.754821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-06-10 12:09:30.755186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.755555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.755561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 
00:31:37.102 [2024-06-10 12:09:30.755918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-06-10 12:09:30.756300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.756306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-06-10 12:09:30.756663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.757014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.757020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-06-10 12:09:30.757397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.757579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.757586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-06-10 12:09:30.757932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.758236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.758256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-06-10 12:09:30.758600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.758940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.758946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-06-10 12:09:30.759325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.759700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.759708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-06-10 12:09:30.760071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.760446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.760453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 
00:31:37.103 [2024-06-10 12:09:30.760802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.761184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.761191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-06-10 12:09:30.761534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.761912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.761919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-06-10 12:09:30.762255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.762610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.762616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-06-10 12:09:30.762960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.763322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.763329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-06-10 12:09:30.763604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.763926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.763932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-06-10 12:09:30.764267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.764545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.764551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-06-10 12:09:30.764927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.765273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.765280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 
00:31:37.103 [2024-06-10 12:09:30.765690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.766039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.766046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-06-10 12:09:30.766304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.766687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.766694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-06-10 12:09:30.767049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.767392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.767399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-06-10 12:09:30.767758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.768141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.768148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-06-10 12:09:30.768466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.768843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.768849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-06-10 12:09:30.769185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.769532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.769538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-06-10 12:09:30.769901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.770241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.770252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 
00:31:37.103 [2024-06-10 12:09:30.770575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.770915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.770921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-06-10 12:09:30.771275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.771433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.771440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-06-10 12:09:30.771831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.772175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.772182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-06-10 12:09:30.772531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.772708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.772715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-06-10 12:09:30.773061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.773250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.773257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-06-10 12:09:30.773612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-06-10 12:09:30.773961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.773967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.104 qpair failed and we were unable to recover it. 00:31:37.104 [2024-06-10 12:09:30.774324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.774702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.774709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.104 qpair failed and we were unable to recover it. 
00:31:37.104 [2024-06-10 12:09:30.775112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.775499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.775506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.104 qpair failed and we were unable to recover it. 00:31:37.104 [2024-06-10 12:09:30.775887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.776234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.776240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.104 qpair failed and we were unable to recover it. 00:31:37.104 [2024-06-10 12:09:30.776471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.776811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.776818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.104 qpair failed and we were unable to recover it. 00:31:37.104 [2024-06-10 12:09:30.777200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.777594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.777600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.104 qpair failed and we were unable to recover it. 00:31:37.104 [2024-06-10 12:09:30.777912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.778253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.778259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.104 qpair failed and we were unable to recover it. 00:31:37.104 [2024-06-10 12:09:30.778604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.778985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.778993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.104 qpair failed and we were unable to recover it. 00:31:37.104 [2024-06-10 12:09:30.779285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.779640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.779646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.104 qpair failed and we were unable to recover it. 
00:31:37.104 [2024-06-10 12:09:30.780024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.780232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.780239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.104 qpair failed and we were unable to recover it. 00:31:37.104 [2024-06-10 12:09:30.780520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.780882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.780890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.104 qpair failed and we were unable to recover it. 00:31:37.104 [2024-06-10 12:09:30.781270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.781595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.781601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.104 qpair failed and we were unable to recover it. 00:31:37.104 [2024-06-10 12:09:30.781945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.782324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.782331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.104 qpair failed and we were unable to recover it. 00:31:37.104 [2024-06-10 12:09:30.782548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.782920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.782926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.104 qpair failed and we were unable to recover it. 00:31:37.104 [2024-06-10 12:09:30.783260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.783637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.783643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.104 qpair failed and we were unable to recover it. 00:31:37.104 [2024-06-10 12:09:30.784006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.784370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.784376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.104 qpair failed and we were unable to recover it. 
00:31:37.104 [2024-06-10 12:09:30.784717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.785062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.785068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.104 qpair failed and we were unable to recover it. 00:31:37.104 [2024-06-10 12:09:30.785430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.785758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.785765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.104 qpair failed and we were unable to recover it. 00:31:37.104 [2024-06-10 12:09:30.785959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.786325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.786332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.104 qpair failed and we were unable to recover it. 00:31:37.104 [2024-06-10 12:09:30.786683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.787030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.787037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.104 qpair failed and we were unable to recover it. 00:31:37.104 [2024-06-10 12:09:30.787485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.787824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.787830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.104 qpair failed and we were unable to recover it. 00:31:37.104 [2024-06-10 12:09:30.788181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.788533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.788540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.104 qpair failed and we were unable to recover it. 00:31:37.104 [2024-06-10 12:09:30.788725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.789042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.789049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.104 qpair failed and we were unable to recover it. 
00:31:37.104 [2024-06-10 12:09:30.789431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.789776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.789782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.104 qpair failed and we were unable to recover it. 00:31:37.104 [2024-06-10 12:09:30.790170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.790496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.790502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.104 qpair failed and we were unable to recover it. 00:31:37.104 [2024-06-10 12:09:30.790881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.791239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.791249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.104 qpair failed and we were unable to recover it. 00:31:37.104 [2024-06-10 12:09:30.791467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.791819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.791826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.104 qpair failed and we were unable to recover it. 00:31:37.104 [2024-06-10 12:09:30.792228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.792648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.792655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.104 qpair failed and we were unable to recover it. 00:31:37.104 [2024-06-10 12:09:30.792946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.793282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.104 [2024-06-10 12:09:30.793288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.105 qpair failed and we were unable to recover it. 00:31:37.105 [2024-06-10 12:09:30.793698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.794076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.794083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.105 qpair failed and we were unable to recover it. 
00:31:37.105 [2024-06-10 12:09:30.794335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.794614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.794620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.105 qpair failed and we were unable to recover it. 00:31:37.105 [2024-06-10 12:09:30.795000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.795361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.795369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.105 qpair failed and we were unable to recover it. 00:31:37.105 [2024-06-10 12:09:30.795638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.796010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.796017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.105 qpair failed and we were unable to recover it. 00:31:37.105 [2024-06-10 12:09:30.796361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.796750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.796757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.105 qpair failed and we were unable to recover it. 00:31:37.105 [2024-06-10 12:09:30.797092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.797431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.797438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.105 qpair failed and we were unable to recover it. 00:31:37.105 [2024-06-10 12:09:30.797814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.798160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.798167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.105 qpair failed and we were unable to recover it. 00:31:37.105 [2024-06-10 12:09:30.798469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.798838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.798844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.105 qpair failed and we were unable to recover it. 
00:31:37.105 [2024-06-10 12:09:30.799185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.799549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.799556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.105 qpair failed and we were unable to recover it. 00:31:37.105 [2024-06-10 12:09:30.799885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.800250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.800257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.105 qpair failed and we were unable to recover it. 00:31:37.105 [2024-06-10 12:09:30.800502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.800868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.800875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.105 qpair failed and we were unable to recover it. 00:31:37.105 [2024-06-10 12:09:30.801219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.801564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.801571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.105 qpair failed and we were unable to recover it. 00:31:37.105 [2024-06-10 12:09:30.801932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.802276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.802285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.105 qpair failed and we were unable to recover it. 00:31:37.105 [2024-06-10 12:09:30.802669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.803003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.803009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.105 qpair failed and we were unable to recover it. 00:31:37.105 [2024-06-10 12:09:30.803335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.803689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.803695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.105 qpair failed and we were unable to recover it. 
00:31:37.105 [2024-06-10 12:09:30.803907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.804253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.804259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.105 qpair failed and we were unable to recover it. 00:31:37.105 [2024-06-10 12:09:30.804602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.804952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.804958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.105 qpair failed and we were unable to recover it. 00:31:37.105 [2024-06-10 12:09:30.805322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.805588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.805594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.105 qpair failed and we were unable to recover it. 00:31:37.105 [2024-06-10 12:09:30.805856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.806178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.806185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.105 qpair failed and we were unable to recover it. 00:31:37.105 [2024-06-10 12:09:30.806544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.806902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.806908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.105 qpair failed and we were unable to recover it. 00:31:37.105 [2024-06-10 12:09:30.807165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.807513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.807520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.105 qpair failed and we were unable to recover it. 00:31:37.105 [2024-06-10 12:09:30.807874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.808253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.808259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.105 qpair failed and we were unable to recover it. 
00:31:37.105 [2024-06-10 12:09:30.808599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.808975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.808982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.105 qpair failed and we were unable to recover it. 00:31:37.105 [2024-06-10 12:09:30.809200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.809454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.809461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.105 qpair failed and we were unable to recover it. 00:31:37.105 [2024-06-10 12:09:30.809901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.810278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.810286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.105 qpair failed and we were unable to recover it. 00:31:37.105 [2024-06-10 12:09:30.810608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.810947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.105 [2024-06-10 12:09:30.810953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.105 qpair failed and we were unable to recover it. 00:31:37.105 [2024-06-10 12:09:30.811291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.811733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.811740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.106 qpair failed and we were unable to recover it. 00:31:37.106 [2024-06-10 12:09:30.812048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.812402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.812409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.106 qpair failed and we were unable to recover it. 00:31:37.106 [2024-06-10 12:09:30.812763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.813086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.813094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.106 qpair failed and we were unable to recover it. 
00:31:37.106 [2024-06-10 12:09:30.813463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.813847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.813853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.106 qpair failed and we were unable to recover it. 00:31:37.106 [2024-06-10 12:09:30.814198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.814568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.814575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.106 qpair failed and we were unable to recover it. 00:31:37.106 [2024-06-10 12:09:30.814928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.815315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.815322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.106 qpair failed and we were unable to recover it. 00:31:37.106 [2024-06-10 12:09:30.815666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.816041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.816049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.106 qpair failed and we were unable to recover it. 00:31:37.106 [2024-06-10 12:09:30.816411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.816766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.816772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.106 qpair failed and we were unable to recover it. 00:31:37.106 [2024-06-10 12:09:30.817071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.817451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.817458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.106 qpair failed and we were unable to recover it. 00:31:37.106 [2024-06-10 12:09:30.817839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.818190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.818197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.106 qpair failed and we were unable to recover it. 
00:31:37.106 [2024-06-10 12:09:30.818561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.818910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.818916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.106 qpair failed and we were unable to recover it. 00:31:37.106 [2024-06-10 12:09:30.819251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.819602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.819609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.106 qpair failed and we were unable to recover it. 00:31:37.106 [2024-06-10 12:09:30.819990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.820453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.820481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.106 qpair failed and we were unable to recover it. 00:31:37.106 [2024-06-10 12:09:30.820860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.821224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.821231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.106 qpair failed and we were unable to recover it. 00:31:37.106 [2024-06-10 12:09:30.821582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.821965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.821972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.106 qpair failed and we were unable to recover it. 00:31:37.106 [2024-06-10 12:09:30.822464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.822843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.822852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.106 qpair failed and we were unable to recover it. 00:31:37.106 [2024-06-10 12:09:30.823219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.823599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.823606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.106 qpair failed and we were unable to recover it. 
00:31:37.106 [2024-06-10 12:09:30.823918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.824266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.824274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.106 qpair failed and we were unable to recover it. 00:31:37.106 [2024-06-10 12:09:30.824581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.824921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.824927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.106 qpair failed and we were unable to recover it. 00:31:37.106 [2024-06-10 12:09:30.825193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.825584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.825591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.106 qpair failed and we were unable to recover it. 00:31:37.106 [2024-06-10 12:09:30.825929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.826281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.826288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.106 qpair failed and we were unable to recover it. 00:31:37.106 [2024-06-10 12:09:30.826658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.827035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.827041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.106 qpair failed and we were unable to recover it. 00:31:37.106 [2024-06-10 12:09:30.827384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.827760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.827766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.106 qpair failed and we were unable to recover it. 00:31:37.106 [2024-06-10 12:09:30.828016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.106 [2024-06-10 12:09:30.828365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.828373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.107 qpair failed and we were unable to recover it. 
00:31:37.107 [2024-06-10 12:09:30.828623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.828997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.829003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.107 qpair failed and we were unable to recover it. 00:31:37.107 [2024-06-10 12:09:30.829338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.829719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.829726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.107 qpair failed and we were unable to recover it. 00:31:37.107 [2024-06-10 12:09:30.830098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.830458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.830465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.107 qpair failed and we were unable to recover it. 00:31:37.107 [2024-06-10 12:09:30.830725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.831110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.831116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.107 qpair failed and we were unable to recover it. 00:31:37.107 [2024-06-10 12:09:30.831508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.831871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.831878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.107 qpair failed and we were unable to recover it. 00:31:37.107 [2024-06-10 12:09:30.832277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.832637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.832644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.107 qpair failed and we were unable to recover it. 00:31:37.107 [2024-06-10 12:09:30.832982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.833328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.833336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.107 qpair failed and we were unable to recover it. 
00:31:37.107 [2024-06-10 12:09:30.833586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.833880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.833887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.107 qpair failed and we were unable to recover it. 00:31:37.107 [2024-06-10 12:09:30.834251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.834625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.834631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.107 qpair failed and we were unable to recover it. 00:31:37.107 [2024-06-10 12:09:30.834993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.835338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.835345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.107 qpair failed and we were unable to recover it. 00:31:37.107 [2024-06-10 12:09:30.835704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.836056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.836062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.107 qpair failed and we were unable to recover it. 00:31:37.107 [2024-06-10 12:09:30.836437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.836755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.836762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.107 qpair failed and we were unable to recover it. 00:31:37.107 [2024-06-10 12:09:30.837188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.837531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.837537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.107 qpair failed and we were unable to recover it. 00:31:37.107 [2024-06-10 12:09:30.837876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.838210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.838217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.107 qpair failed and we were unable to recover it. 
00:31:37.107 [2024-06-10 12:09:30.838574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.838915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.838923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.107 qpair failed and we were unable to recover it. 00:31:37.107 [2024-06-10 12:09:30.839283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.839625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.839632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.107 qpair failed and we were unable to recover it. 00:31:37.107 [2024-06-10 12:09:30.840010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.840263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.840269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.107 qpair failed and we were unable to recover it. 00:31:37.107 [2024-06-10 12:09:30.840617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.840961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.840967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.107 qpair failed and we were unable to recover it. 00:31:37.107 [2024-06-10 12:09:30.841312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.841653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.841659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.107 qpair failed and we were unable to recover it. 00:31:37.107 [2024-06-10 12:09:30.842019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.842428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.842435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.107 qpair failed and we were unable to recover it. 00:31:37.107 [2024-06-10 12:09:30.842767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.843160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.843167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.107 qpair failed and we were unable to recover it. 
00:31:37.107 [2024-06-10 12:09:30.843523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.843869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.843875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.107 qpair failed and we were unable to recover it. 00:31:37.107 [2024-06-10 12:09:30.844239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.844589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.844595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.107 qpair failed and we were unable to recover it. 00:31:37.107 [2024-06-10 12:09:30.844932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.845296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.845303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.107 qpair failed and we were unable to recover it. 00:31:37.107 [2024-06-10 12:09:30.845657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.845998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.846004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.107 qpair failed and we were unable to recover it. 00:31:37.107 [2024-06-10 12:09:30.846362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.846710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.846716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.107 qpair failed and we were unable to recover it. 00:31:37.107 [2024-06-10 12:09:30.847103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.847439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.107 [2024-06-10 12:09:30.847448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.107 qpair failed and we were unable to recover it. 00:31:37.108 [2024-06-10 12:09:30.847806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.108 [2024-06-10 12:09:30.848128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.108 [2024-06-10 12:09:30.848134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.108 qpair failed and we were unable to recover it. 
00:31:37.108 [2024-06-10 12:09:30.848477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.378 [2024-06-10 12:09:30.848724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.848732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.379 qpair failed and we were unable to recover it. 00:31:37.379 [2024-06-10 12:09:30.849072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.849354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.849362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.379 qpair failed and we were unable to recover it. 00:31:37.379 [2024-06-10 12:09:30.849607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.849739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.849747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.379 qpair failed and we were unable to recover it. 00:31:37.379 [2024-06-10 12:09:30.850101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.850419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.850427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.379 qpair failed and we were unable to recover it. 00:31:37.379 [2024-06-10 12:09:30.850778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.851093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.851099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.379 qpair failed and we were unable to recover it. 00:31:37.379 [2024-06-10 12:09:30.851333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.851605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.851611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.379 qpair failed and we were unable to recover it. 00:31:37.379 [2024-06-10 12:09:30.851996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.852324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.852330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.379 qpair failed and we were unable to recover it. 
00:31:37.379 [2024-06-10 12:09:30.852580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.853018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.853025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.379 qpair failed and we were unable to recover it. 00:31:37.379 [2024-06-10 12:09:30.853407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.853748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.853755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.379 qpair failed and we were unable to recover it. 00:31:37.379 [2024-06-10 12:09:30.854106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.854420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.854427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.379 qpair failed and we were unable to recover it. 00:31:37.379 [2024-06-10 12:09:30.854678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.855021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.855027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.379 qpair failed and we were unable to recover it. 00:31:37.379 [2024-06-10 12:09:30.855290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.855544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.855550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.379 qpair failed and we were unable to recover it. 00:31:37.379 [2024-06-10 12:09:30.855820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.856165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.856171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.379 qpair failed and we were unable to recover it. 00:31:37.379 [2024-06-10 12:09:30.856480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.856823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.856830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.379 qpair failed and we were unable to recover it. 
00:31:37.379 [2024-06-10 12:09:30.857173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.857461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.857468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.379 qpair failed and we were unable to recover it. 00:31:37.379 [2024-06-10 12:09:30.857617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.857915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.857921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.379 qpair failed and we were unable to recover it. 00:31:37.379 [2024-06-10 12:09:30.858249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.858671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.858678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.379 qpair failed and we were unable to recover it. 00:31:37.379 [2024-06-10 12:09:30.859034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.859369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.859376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.379 qpair failed and we were unable to recover it. 00:31:37.379 [2024-06-10 12:09:30.859747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.860122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.860129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.379 qpair failed and we were unable to recover it. 00:31:37.379 [2024-06-10 12:09:30.860455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.860847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.860854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.379 qpair failed and we were unable to recover it. 00:31:37.379 [2024-06-10 12:09:30.861228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.861594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.861602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.379 qpair failed and we were unable to recover it. 
00:31:37.379 [2024-06-10 12:09:30.861963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.862312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.862319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.379 qpair failed and we were unable to recover it. 00:31:37.379 [2024-06-10 12:09:30.862669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.862945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.862951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.379 qpair failed and we were unable to recover it. 00:31:37.379 [2024-06-10 12:09:30.863175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.863546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.863553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.379 qpair failed and we were unable to recover it. 00:31:37.379 [2024-06-10 12:09:30.863742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.863986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.863993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.379 qpair failed and we were unable to recover it. 00:31:37.379 [2024-06-10 12:09:30.864240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.864595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.864601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.379 qpair failed and we were unable to recover it. 00:31:37.379 [2024-06-10 12:09:30.864854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.865216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.865222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.379 qpair failed and we were unable to recover it. 00:31:37.379 [2024-06-10 12:09:30.865619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.865860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.865866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.379 qpair failed and we were unable to recover it. 
00:31:37.379 [2024-06-10 12:09:30.866215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.379 [2024-06-10 12:09:30.866578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.866585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 00:31:37.380 [2024-06-10 12:09:30.866926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.867295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.867301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 00:31:37.380 [2024-06-10 12:09:30.867700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.868100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.868106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 00:31:37.380 [2024-06-10 12:09:30.868499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.868845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.868851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 00:31:37.380 [2024-06-10 12:09:30.869074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.869278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.869285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 00:31:37.380 [2024-06-10 12:09:30.869546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.869934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.869940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 00:31:37.380 [2024-06-10 12:09:30.870325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.870581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.870587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 
00:31:37.380 [2024-06-10 12:09:30.870954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.871153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.871160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 00:31:37.380 [2024-06-10 12:09:30.871475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.871823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.871830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 00:31:37.380 [2024-06-10 12:09:30.872082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.872450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.872456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 00:31:37.380 [2024-06-10 12:09:30.872798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.873172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.873178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 00:31:37.380 [2024-06-10 12:09:30.873518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.873899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.873905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 00:31:37.380 [2024-06-10 12:09:30.874090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.874357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.874364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 00:31:37.380 [2024-06-10 12:09:30.874785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.875128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.875135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 
00:31:37.380 [2024-06-10 12:09:30.875492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.875820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.875827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 00:31:37.380 [2024-06-10 12:09:30.876189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.876556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.876563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 00:31:37.380 [2024-06-10 12:09:30.876854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.877266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.877272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 00:31:37.380 [2024-06-10 12:09:30.877524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.877913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.877919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 00:31:37.380 [2024-06-10 12:09:30.878264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.878585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.878592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 00:31:37.380 [2024-06-10 12:09:30.878957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.879317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.879324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 00:31:37.380 [2024-06-10 12:09:30.879727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.880172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.880178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 
00:31:37.380 [2024-06-10 12:09:30.880571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.880803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.880809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 00:31:37.380 [2024-06-10 12:09:30.881085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.881421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.881427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 00:31:37.380 [2024-06-10 12:09:30.881794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.882166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.882172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 00:31:37.380 [2024-06-10 12:09:30.882517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.882903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.882910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 00:31:37.380 [2024-06-10 12:09:30.883332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.883591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.883597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 00:31:37.380 [2024-06-10 12:09:30.883966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.884192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.884199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 00:31:37.380 [2024-06-10 12:09:30.884569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.884931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.884939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 
00:31:37.380 [2024-06-10 12:09:30.885333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.885526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.885534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.380 qpair failed and we were unable to recover it. 00:31:37.380 [2024-06-10 12:09:30.885946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.380 [2024-06-10 12:09:30.886155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.886162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 00:31:37.381 [2024-06-10 12:09:30.886525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.886904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.886911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 00:31:37.381 [2024-06-10 12:09:30.887249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.887449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.887456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 00:31:37.381 [2024-06-10 12:09:30.887836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.888072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.888078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 00:31:37.381 [2024-06-10 12:09:30.888422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.888817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.888824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 00:31:37.381 [2024-06-10 12:09:30.889212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.889570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.889578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 
00:31:37.381 [2024-06-10 12:09:30.889942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.890283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.890289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 00:31:37.381 [2024-06-10 12:09:30.890640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.890998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.891004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 00:31:37.381 [2024-06-10 12:09:30.891419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.891805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.891811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 00:31:37.381 [2024-06-10 12:09:30.892237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.892652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.892658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 00:31:37.381 [2024-06-10 12:09:30.893017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.893374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.893381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 00:31:37.381 [2024-06-10 12:09:30.893606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.894081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.894088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 00:31:37.381 [2024-06-10 12:09:30.894455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.894806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.894813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 
00:31:37.381 [2024-06-10 12:09:30.895156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.895519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.895527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 00:31:37.381 [2024-06-10 12:09:30.895609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.895969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.895976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 00:31:37.381 [2024-06-10 12:09:30.896332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.896596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.896602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 00:31:37.381 [2024-06-10 12:09:30.896669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.896996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.897003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 00:31:37.381 [2024-06-10 12:09:30.897254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.897595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.897602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 00:31:37.381 [2024-06-10 12:09:30.897940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.898279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.898288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 00:31:37.381 [2024-06-10 12:09:30.898573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.898870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.898876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 
00:31:37.381 [2024-06-10 12:09:30.899253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.899512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.899518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 00:31:37.381 [2024-06-10 12:09:30.899953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.900388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.900395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 00:31:37.381 [2024-06-10 12:09:30.900748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.901097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.901104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 00:31:37.381 [2024-06-10 12:09:30.901465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.901721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.901727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 00:31:37.381 [2024-06-10 12:09:30.902092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.902340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.902346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 00:31:37.381 [2024-06-10 12:09:30.902745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.903030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.903036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 00:31:37.381 [2024-06-10 12:09:30.903355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.903748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.903754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 
00:31:37.381 [2024-06-10 12:09:30.903962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.904156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.904163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 00:31:37.381 [2024-06-10 12:09:30.904421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.904807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.381 [2024-06-10 12:09:30.904815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.381 qpair failed and we were unable to recover it. 00:31:37.382 [2024-06-10 12:09:30.905073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.905352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.905359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.382 qpair failed and we were unable to recover it. 00:31:37.382 [2024-06-10 12:09:30.905745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.906089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.906096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.382 qpair failed and we were unable to recover it. 00:31:37.382 [2024-06-10 12:09:30.906375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.906726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.906733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.382 qpair failed and we were unable to recover it. 00:31:37.382 [2024-06-10 12:09:30.907106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.907464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.907471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.382 qpair failed and we were unable to recover it. 00:31:37.382 [2024-06-10 12:09:30.907713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.908062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.908069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.382 qpair failed and we were unable to recover it. 
00:31:37.382 [2024-06-10 12:09:30.908321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.908645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.908651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.382 qpair failed and we were unable to recover it. 00:31:37.382 [2024-06-10 12:09:30.908990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.909364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.909371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.382 qpair failed and we were unable to recover it. 00:31:37.382 [2024-06-10 12:09:30.909639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.909890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.909897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.382 qpair failed and we were unable to recover it. 00:31:37.382 [2024-06-10 12:09:30.910275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.910624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.910631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.382 qpair failed and we were unable to recover it. 00:31:37.382 [2024-06-10 12:09:30.910858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.911143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.911150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.382 qpair failed and we were unable to recover it. 00:31:37.382 [2024-06-10 12:09:30.911409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.911686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.911692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.382 qpair failed and we were unable to recover it. 00:31:37.382 [2024-06-10 12:09:30.912036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.912403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.912409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.382 qpair failed and we were unable to recover it. 
00:31:37.382 [2024-06-10 12:09:30.912599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.912840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.912848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.382 qpair failed and we were unable to recover it. 00:31:37.382 [2024-06-10 12:09:30.913198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.913564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.913571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.382 qpair failed and we were unable to recover it. 00:31:37.382 [2024-06-10 12:09:30.913909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.914113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.914120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.382 qpair failed and we were unable to recover it. 00:31:37.382 [2024-06-10 12:09:30.914501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.914859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.914865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.382 qpair failed and we were unable to recover it. 00:31:37.382 [2024-06-10 12:09:30.915219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.915399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.915405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.382 qpair failed and we were unable to recover it. 00:31:37.382 [2024-06-10 12:09:30.915697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.916044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.916051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.382 qpair failed and we were unable to recover it. 00:31:37.382 [2024-06-10 12:09:30.916411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.916763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.916769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.382 qpair failed and we were unable to recover it. 
00:31:37.382 [2024-06-10 12:09:30.917008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.917385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.917394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.382 qpair failed and we were unable to recover it. 00:31:37.382 [2024-06-10 12:09:30.917740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.918003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.918009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.382 qpair failed and we were unable to recover it. 00:31:37.382 [2024-06-10 12:09:30.918394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.918742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.918749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.382 qpair failed and we were unable to recover it. 00:31:37.382 [2024-06-10 12:09:30.919100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.919328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.919335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.382 qpair failed and we were unable to recover it. 00:31:37.382 [2024-06-10 12:09:30.919555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.919922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.919929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.382 qpair failed and we were unable to recover it. 00:31:37.382 [2024-06-10 12:09:30.920071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.920451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.920458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.382 qpair failed and we were unable to recover it. 00:31:37.382 [2024-06-10 12:09:30.920828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.921182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.921189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.382 qpair failed and we were unable to recover it. 
00:31:37.382 [2024-06-10 12:09:30.921545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.921877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.921884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.382 qpair failed and we were unable to recover it. 00:31:37.382 [2024-06-10 12:09:30.922231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.922606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.922615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.382 qpair failed and we were unable to recover it. 00:31:37.382 [2024-06-10 12:09:30.922994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.382 [2024-06-10 12:09:30.923341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.923347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.383 qpair failed and we were unable to recover it. 00:31:37.383 [2024-06-10 12:09:30.923537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.923789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.923796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.383 qpair failed and we were unable to recover it. 00:31:37.383 [2024-06-10 12:09:30.924188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.924411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.924417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.383 qpair failed and we were unable to recover it. 00:31:37.383 [2024-06-10 12:09:30.924661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.925052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.925058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.383 qpair failed and we were unable to recover it. 00:31:37.383 [2024-06-10 12:09:30.925356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.925703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.925709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.383 qpair failed and we were unable to recover it. 
00:31:37.383 [2024-06-10 12:09:30.926075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.926454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.926460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.383 qpair failed and we were unable to recover it. 00:31:37.383 [2024-06-10 12:09:30.926850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.927099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.927105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.383 qpair failed and we were unable to recover it. 00:31:37.383 [2024-06-10 12:09:30.927514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.927758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.927764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.383 qpair failed and we were unable to recover it. 00:31:37.383 [2024-06-10 12:09:30.927939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.928277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.928283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.383 qpair failed and we were unable to recover it. 00:31:37.383 [2024-06-10 12:09:30.928628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.928976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.928982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.383 qpair failed and we were unable to recover it. 00:31:37.383 [2024-06-10 12:09:30.929362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.929752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.929759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.383 qpair failed and we were unable to recover it. 00:31:37.383 [2024-06-10 12:09:30.930141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.930461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.930468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.383 qpair failed and we were unable to recover it. 
00:31:37.383 [2024-06-10 12:09:30.930848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.931074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.931088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.383 qpair failed and we were unable to recover it. 00:31:37.383 [2024-06-10 12:09:30.931449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.931783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.931789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.383 qpair failed and we were unable to recover it. 00:31:37.383 [2024-06-10 12:09:30.932140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.932494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.932501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.383 qpair failed and we were unable to recover it. 00:31:37.383 [2024-06-10 12:09:30.932763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.933155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.933161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.383 qpair failed and we were unable to recover it. 00:31:37.383 [2024-06-10 12:09:30.933377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.933790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.933796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.383 qpair failed and we were unable to recover it. 00:31:37.383 [2024-06-10 12:09:30.934144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.934500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.934506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.383 qpair failed and we were unable to recover it. 00:31:37.383 [2024-06-10 12:09:30.934870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.935237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.935247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.383 qpair failed and we were unable to recover it. 
00:31:37.383 [2024-06-10 12:09:30.935586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.935961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.935967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.383 qpair failed and we were unable to recover it. 00:31:37.383 [2024-06-10 12:09:30.936318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.936491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.936498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.383 qpair failed and we were unable to recover it. 00:31:37.383 [2024-06-10 12:09:30.936848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.937197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.937204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.383 qpair failed and we were unable to recover it. 00:31:37.383 [2024-06-10 12:09:30.937536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.937622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.383 [2024-06-10 12:09:30.937628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.383 qpair failed and we were unable to recover it. 00:31:37.384 [2024-06-10 12:09:30.937980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.938340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.938347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.384 qpair failed and we were unable to recover it. 00:31:37.384 [2024-06-10 12:09:30.938566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.938800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.938806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.384 qpair failed and we were unable to recover it. 00:31:37.384 [2024-06-10 12:09:30.939153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.939516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.939522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.384 qpair failed and we were unable to recover it. 
00:31:37.384 [2024-06-10 12:09:30.939710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.940036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.940042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.384 qpair failed and we were unable to recover it. 00:31:37.384 [2024-06-10 12:09:30.940433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.940801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.940807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.384 qpair failed and we were unable to recover it. 00:31:37.384 [2024-06-10 12:09:30.941152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.941569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.941576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.384 qpair failed and we were unable to recover it. 00:31:37.384 [2024-06-10 12:09:30.941907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.942260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.942267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.384 qpair failed and we were unable to recover it. 00:31:37.384 [2024-06-10 12:09:30.942624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.942914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.942921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.384 qpair failed and we were unable to recover it. 00:31:37.384 [2024-06-10 12:09:30.943287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.943644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.943650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.384 qpair failed and we were unable to recover it. 00:31:37.384 [2024-06-10 12:09:30.944018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.944360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.944367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.384 qpair failed and we were unable to recover it. 
00:31:37.384 [2024-06-10 12:09:30.944731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.944990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.944997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.384 qpair failed and we were unable to recover it. 00:31:37.384 [2024-06-10 12:09:30.945336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.945604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.945610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.384 qpair failed and we were unable to recover it. 00:31:37.384 [2024-06-10 12:09:30.945944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.946312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.946319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.384 qpair failed and we were unable to recover it. 00:31:37.384 [2024-06-10 12:09:30.946674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.946927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.946933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.384 qpair failed and we were unable to recover it. 00:31:37.384 [2024-06-10 12:09:30.947258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.947614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.947620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.384 qpair failed and we were unable to recover it. 00:31:37.384 [2024-06-10 12:09:30.947862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.948113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.948119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.384 qpair failed and we were unable to recover it. 00:31:37.384 [2024-06-10 12:09:30.948349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.948691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.948699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.384 qpair failed and we were unable to recover it. 
00:31:37.384 [2024-06-10 12:09:30.949034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.949377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.949384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.384 qpair failed and we were unable to recover it. 00:31:37.384 [2024-06-10 12:09:30.949565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.949935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.949943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.384 qpair failed and we were unable to recover it. 00:31:37.384 [2024-06-10 12:09:30.950327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.950683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.950690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.384 qpair failed and we were unable to recover it. 00:31:37.384 [2024-06-10 12:09:30.951053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.951412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.951419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.384 qpair failed and we were unable to recover it. 00:31:37.384 [2024-06-10 12:09:30.951774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.952148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.952154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.384 qpair failed and we were unable to recover it. 00:31:37.384 [2024-06-10 12:09:30.952491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.952832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.952840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.384 qpair failed and we were unable to recover it. 00:31:37.384 [2024-06-10 12:09:30.953251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.953621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.953627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.384 qpair failed and we were unable to recover it. 
00:31:37.384 [2024-06-10 12:09:30.953954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.954331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.954337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.384 qpair failed and we were unable to recover it. 00:31:37.384 [2024-06-10 12:09:30.954594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.954972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.954980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.384 qpair failed and we were unable to recover it. 00:31:37.384 [2024-06-10 12:09:30.955340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.955575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.955582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.384 qpair failed and we were unable to recover it. 00:31:37.384 [2024-06-10 12:09:30.955840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.956218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.956224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.384 qpair failed and we were unable to recover it. 00:31:37.384 [2024-06-10 12:09:30.956633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.956977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.384 [2024-06-10 12:09:30.956983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 00:31:37.385 [2024-06-10 12:09:30.957327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.957709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.957715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 00:31:37.385 [2024-06-10 12:09:30.958051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.958419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.958426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 
00:31:37.385 [2024-06-10 12:09:30.958689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.959028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.959035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 00:31:37.385 [2024-06-10 12:09:30.959371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.959529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.959535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 00:31:37.385 [2024-06-10 12:09:30.959854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.960198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.960206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 00:31:37.385 [2024-06-10 12:09:30.960569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.960900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.960907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 00:31:37.385 [2024-06-10 12:09:30.961148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.961476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.961482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 00:31:37.385 [2024-06-10 12:09:30.961822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.962175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.962183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 00:31:37.385 [2024-06-10 12:09:30.962536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.962820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.962827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 
00:31:37.385 [2024-06-10 12:09:30.963182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.963527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.963534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 00:31:37.385 [2024-06-10 12:09:30.963912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.964267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.964275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 00:31:37.385 [2024-06-10 12:09:30.964596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.964933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.964940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 00:31:37.385 [2024-06-10 12:09:30.965279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.965612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.965619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 00:31:37.385 [2024-06-10 12:09:30.965882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.966246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.966253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 00:31:37.385 [2024-06-10 12:09:30.966682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.967021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.967027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 00:31:37.385 [2024-06-10 12:09:30.967386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.967767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.967773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 
00:31:37.385 [2024-06-10 12:09:30.968126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.968498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.968505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 00:31:37.385 [2024-06-10 12:09:30.968840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.969145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.969152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 00:31:37.385 [2024-06-10 12:09:30.969427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.969767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.969773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 00:31:37.385 [2024-06-10 12:09:30.970111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.970467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.970475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 00:31:37.385 [2024-06-10 12:09:30.970822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.971205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.971211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 00:31:37.385 [2024-06-10 12:09:30.971554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.971918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.971924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 00:31:37.385 [2024-06-10 12:09:30.972266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.972589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.972597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 
00:31:37.385 [2024-06-10 12:09:30.972968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.973325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.973332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 00:31:37.385 [2024-06-10 12:09:30.973701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.973929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.973935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 00:31:37.385 [2024-06-10 12:09:30.974270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.974626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.974632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 00:31:37.385 [2024-06-10 12:09:30.974968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.975301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.975308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 00:31:37.385 [2024-06-10 12:09:30.975653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.976003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.976009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.385 qpair failed and we were unable to recover it. 00:31:37.385 [2024-06-10 12:09:30.976422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.385 [2024-06-10 12:09:30.976741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.976747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 00:31:37.386 [2024-06-10 12:09:30.976951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.977181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.977189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 
00:31:37.386 [2024-06-10 12:09:30.977568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.977918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.977924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 00:31:37.386 [2024-06-10 12:09:30.978263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.978646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.978652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 00:31:37.386 [2024-06-10 12:09:30.979018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.979218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.979225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 00:31:37.386 [2024-06-10 12:09:30.979585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.979887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.979893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 00:31:37.386 [2024-06-10 12:09:30.980235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.980596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.980603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 00:31:37.386 [2024-06-10 12:09:30.980815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.981156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.981162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 00:31:37.386 [2024-06-10 12:09:30.981579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.981871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.981877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 
00:31:37.386 [2024-06-10 12:09:30.982158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.982551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.982557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 00:31:37.386 [2024-06-10 12:09:30.982892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.983214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.983221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 00:31:37.386 [2024-06-10 12:09:30.983471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.983687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.983694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 00:31:37.386 [2024-06-10 12:09:30.984059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.984450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.984456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 00:31:37.386 [2024-06-10 12:09:30.984792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.985199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.985205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 00:31:37.386 [2024-06-10 12:09:30.985537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.985903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.985910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 00:31:37.386 [2024-06-10 12:09:30.986238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.986623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.986629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 
00:31:37.386 [2024-06-10 12:09:30.986963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.987298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.987305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 00:31:37.386 [2024-06-10 12:09:30.987548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.987897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.987903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 00:31:37.386 [2024-06-10 12:09:30.988232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.988605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.988611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 00:31:37.386 [2024-06-10 12:09:30.988792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.989109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.989115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 00:31:37.386 [2024-06-10 12:09:30.989451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.989833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.989840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 00:31:37.386 [2024-06-10 12:09:30.990175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.990646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.990653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 00:31:37.386 [2024-06-10 12:09:30.990983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.991335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.991347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 
00:31:37.386 [2024-06-10 12:09:30.991709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.992053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.992059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 00:31:37.386 [2024-06-10 12:09:30.992405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.992745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.992753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 00:31:37.386 [2024-06-10 12:09:30.993109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.993331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.993338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 00:31:37.386 [2024-06-10 12:09:30.993690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.994077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.994084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 00:31:37.386 [2024-06-10 12:09:30.994430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.994806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.994813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 00:31:37.386 [2024-06-10 12:09:30.995175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.995410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.995417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.386 qpair failed and we were unable to recover it. 00:31:37.386 [2024-06-10 12:09:30.995752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.386 [2024-06-10 12:09:30.995987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:30.995993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 
00:31:37.387 [2024-06-10 12:09:30.996337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:30.996690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:30.996697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 00:31:37.387 [2024-06-10 12:09:30.996926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:30.997314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:30.997321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 00:31:37.387 [2024-06-10 12:09:30.997677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:30.998025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:30.998031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 00:31:37.387 [2024-06-10 12:09:30.998418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:30.998605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:30.998612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 00:31:37.387 [2024-06-10 12:09:30.998834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:30.999224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:30.999231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 00:31:37.387 [2024-06-10 12:09:30.999613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:30.999875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:30.999881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 00:31:37.387 [2024-06-10 12:09:31.000141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.000526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.000533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 
00:31:37.387 [2024-06-10 12:09:31.000876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.001232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.001239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 00:31:37.387 [2024-06-10 12:09:31.001597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.001981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.001987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 00:31:37.387 [2024-06-10 12:09:31.002265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.002631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.002638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 00:31:37.387 [2024-06-10 12:09:31.003018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.003409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.003416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 00:31:37.387 [2024-06-10 12:09:31.003743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.004128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.004134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 00:31:37.387 [2024-06-10 12:09:31.004488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.004842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.004851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 00:31:37.387 [2024-06-10 12:09:31.005067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.005450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.005457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 
00:31:37.387 [2024-06-10 12:09:31.005811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.006194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.006201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 00:31:37.387 [2024-06-10 12:09:31.006561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.006958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.006965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 00:31:37.387 [2024-06-10 12:09:31.007303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.007588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.007594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 00:31:37.387 [2024-06-10 12:09:31.007943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.008327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.008333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 00:31:37.387 [2024-06-10 12:09:31.008677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.009047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.009054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 00:31:37.387 [2024-06-10 12:09:31.009306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.009662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.009669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 00:31:37.387 [2024-06-10 12:09:31.010007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.010154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.010161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 
00:31:37.387 [2024-06-10 12:09:31.010522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.010861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.010868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 00:31:37.387 [2024-06-10 12:09:31.011106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.011431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.011439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 00:31:37.387 [2024-06-10 12:09:31.011777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.012118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.012125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 00:31:37.387 [2024-06-10 12:09:31.012482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.012697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.012705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 00:31:37.387 [2024-06-10 12:09:31.013068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.013367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.013373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 00:31:37.387 [2024-06-10 12:09:31.013709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.014056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.014063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 00:31:37.387 [2024-06-10 12:09:31.014356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.014715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.387 [2024-06-10 12:09:31.014722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.387 qpair failed and we were unable to recover it. 
00:31:37.387 [2024-06-10 12:09:31.015058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.015419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.015426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.388 qpair failed and we were unable to recover it. 00:31:37.388 [2024-06-10 12:09:31.015663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.016042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.016049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.388 qpair failed and we were unable to recover it. 00:31:37.388 [2024-06-10 12:09:31.016262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.016629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.016635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.388 qpair failed and we were unable to recover it. 00:31:37.388 [2024-06-10 12:09:31.016996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.017364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.017371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.388 qpair failed and we were unable to recover it. 00:31:37.388 [2024-06-10 12:09:31.017632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.017994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.018002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.388 qpair failed and we were unable to recover it. 00:31:37.388 [2024-06-10 12:09:31.018360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.018701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.018708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.388 qpair failed and we were unable to recover it. 00:31:37.388 [2024-06-10 12:09:31.019047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.019287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.019293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.388 qpair failed and we were unable to recover it. 
00:31:37.388 [2024-06-10 12:09:31.019638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.019978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.019984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.388 qpair failed and we were unable to recover it. 00:31:37.388 [2024-06-10 12:09:31.020330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.020673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.020681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.388 qpair failed and we were unable to recover it. 00:31:37.388 [2024-06-10 12:09:31.021040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.021419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.021426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.388 qpair failed and we were unable to recover it. 00:31:37.388 [2024-06-10 12:09:31.021797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.022162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.022168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.388 qpair failed and we were unable to recover it. 00:31:37.388 [2024-06-10 12:09:31.022438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.022865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.022872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.388 qpair failed and we were unable to recover it. 00:31:37.388 [2024-06-10 12:09:31.023205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.023391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.023398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.388 qpair failed and we were unable to recover it. 00:31:37.388 [2024-06-10 12:09:31.023773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.024120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.024126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.388 qpair failed and we were unable to recover it. 
00:31:37.388 [2024-06-10 12:09:31.024384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.024752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.024760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.388 qpair failed and we were unable to recover it. 00:31:37.388 [2024-06-10 12:09:31.025088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.025432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.025438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.388 qpair failed and we were unable to recover it. 00:31:37.388 [2024-06-10 12:09:31.025694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.026109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.026115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.388 qpair failed and we were unable to recover it. 00:31:37.388 [2024-06-10 12:09:31.026363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.026713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.026720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.388 qpair failed and we were unable to recover it. 00:31:37.388 [2024-06-10 12:09:31.026959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.027329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.027335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.388 qpair failed and we were unable to recover it. 00:31:37.388 [2024-06-10 12:09:31.027507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.027871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.027877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.388 qpair failed and we were unable to recover it. 00:31:37.388 [2024-06-10 12:09:31.028259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.028461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.028468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.388 qpair failed and we were unable to recover it. 
00:31:37.388 [2024-06-10 12:09:31.028777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.029068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.029075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.388 qpair failed and we were unable to recover it. 00:31:37.388 [2024-06-10 12:09:31.029287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.029679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.029686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.388 qpair failed and we were unable to recover it. 00:31:37.388 [2024-06-10 12:09:31.030022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.030404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.388 [2024-06-10 12:09:31.030411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.388 qpair failed and we were unable to recover it. 00:31:37.388 [2024-06-10 12:09:31.030780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.031005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.031012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 00:31:37.389 [2024-06-10 12:09:31.031368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.031605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.031612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 00:31:37.389 [2024-06-10 12:09:31.032004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.032395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.032402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 00:31:37.389 [2024-06-10 12:09:31.032806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.033161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.033168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 
00:31:37.389 [2024-06-10 12:09:31.033531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.033877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.033883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 00:31:37.389 [2024-06-10 12:09:31.034227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.034677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.034684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 00:31:37.389 [2024-06-10 12:09:31.035025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.035373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.035379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 00:31:37.389 [2024-06-10 12:09:31.035722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.036072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.036078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 00:31:37.389 [2024-06-10 12:09:31.036470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.036819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.036826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 00:31:37.389 [2024-06-10 12:09:31.037183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.037571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.037578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 00:31:37.389 [2024-06-10 12:09:31.037722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.038079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.038085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 
00:31:37.389 [2024-06-10 12:09:31.038423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.038696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.038703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 00:31:37.389 [2024-06-10 12:09:31.039065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.039427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.039433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 00:31:37.389 [2024-06-10 12:09:31.039788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.040129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.040135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 00:31:37.389 [2024-06-10 12:09:31.040399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.040593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.040600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 00:31:37.389 [2024-06-10 12:09:31.040989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.041330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.041336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 00:31:37.389 [2024-06-10 12:09:31.041528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.041850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.041857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 00:31:37.389 [2024-06-10 12:09:31.042219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.042649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.042656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 
00:31:37.389 [2024-06-10 12:09:31.042993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.043369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.043376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 00:31:37.389 [2024-06-10 12:09:31.043691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.044051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.044057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 00:31:37.389 [2024-06-10 12:09:31.044437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.044784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.044791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 00:31:37.389 [2024-06-10 12:09:31.045164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.045502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.045509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 00:31:37.389 [2024-06-10 12:09:31.045862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.046200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.046206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 00:31:37.389 [2024-06-10 12:09:31.046554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.046934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.046940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 00:31:37.389 [2024-06-10 12:09:31.047276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.047619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.047625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 
00:31:37.389 [2024-06-10 12:09:31.047884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.048264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.048270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 00:31:37.389 [2024-06-10 12:09:31.048530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.048908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.048914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 00:31:37.389 [2024-06-10 12:09:31.049250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.049524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.049531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 00:31:37.389 [2024-06-10 12:09:31.049910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.050257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.389 [2024-06-10 12:09:31.050263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.389 qpair failed and we were unable to recover it. 00:31:37.390 [2024-06-10 12:09:31.050592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.050935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.050942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.390 qpair failed and we were unable to recover it. 00:31:37.390 [2024-06-10 12:09:31.051200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.051560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.051567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.390 qpair failed and we were unable to recover it. 00:31:37.390 [2024-06-10 12:09:31.051909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.052137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.052144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.390 qpair failed and we were unable to recover it. 
00:31:37.390 [2024-06-10 12:09:31.052499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.052846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.052852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.390 qpair failed and we were unable to recover it. 00:31:37.390 [2024-06-10 12:09:31.053201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.053465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.053471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.390 qpair failed and we were unable to recover it. 00:31:37.390 [2024-06-10 12:09:31.053819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.054016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.054023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.390 qpair failed and we were unable to recover it. 00:31:37.390 [2024-06-10 12:09:31.054354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.054746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.054753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.390 qpair failed and we were unable to recover it. 00:31:37.390 [2024-06-10 12:09:31.055108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.055442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.055448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.390 qpair failed and we were unable to recover it. 00:31:37.390 [2024-06-10 12:09:31.055783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.056128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.056134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.390 qpair failed and we were unable to recover it. 00:31:37.390 [2024-06-10 12:09:31.056488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.056836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.056843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.390 qpair failed and we were unable to recover it. 
00:31:37.390 [2024-06-10 12:09:31.057199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.057537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.057544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.390 qpair failed and we were unable to recover it. 00:31:37.390 [2024-06-10 12:09:31.057878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.058224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.058231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.390 qpair failed and we were unable to recover it. 00:31:37.390 [2024-06-10 12:09:31.058588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.058924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.058930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.390 qpair failed and we were unable to recover it. 00:31:37.390 [2024-06-10 12:09:31.059258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.059624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.059630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.390 qpair failed and we were unable to recover it. 00:31:37.390 [2024-06-10 12:09:31.060016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.060405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.060411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.390 qpair failed and we were unable to recover it. 00:31:37.390 [2024-06-10 12:09:31.060763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.061105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.061111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.390 qpair failed and we were unable to recover it. 00:31:37.390 [2024-06-10 12:09:31.061371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.061753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.061760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.390 qpair failed and we were unable to recover it. 
00:31:37.390 [2024-06-10 12:09:31.062146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.062520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.062526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.390 qpair failed and we were unable to recover it. 00:31:37.390 [2024-06-10 12:09:31.062889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.063136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.063142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.390 qpair failed and we were unable to recover it. 00:31:37.390 [2024-06-10 12:09:31.063503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.063552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.063559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.390 qpair failed and we were unable to recover it. 00:31:37.390 [2024-06-10 12:09:31.063945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.064341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.064348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.390 qpair failed and we were unable to recover it. 00:31:37.390 [2024-06-10 12:09:31.064710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.065091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.065098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.390 qpair failed and we were unable to recover it. 00:31:37.390 [2024-06-10 12:09:31.065463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.065848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.065854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.390 qpair failed and we were unable to recover it. 00:31:37.390 [2024-06-10 12:09:31.066188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.066443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.066449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.390 qpair failed and we were unable to recover it. 
00:31:37.390 [2024-06-10 12:09:31.066805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.067150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.067156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.390 qpair failed and we were unable to recover it. 00:31:37.390 [2024-06-10 12:09:31.067505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.067853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.067859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.390 qpair failed and we were unable to recover it. 00:31:37.390 [2024-06-10 12:09:31.068238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.068577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.068584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.390 qpair failed and we were unable to recover it. 00:31:37.390 [2024-06-10 12:09:31.068846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.069093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.069099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.390 qpair failed and we were unable to recover it. 00:31:37.390 [2024-06-10 12:09:31.069386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.069754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.390 [2024-06-10 12:09:31.069760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 00:31:37.391 [2024-06-10 12:09:31.070138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.070406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.070413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 00:31:37.391 [2024-06-10 12:09:31.070679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.071029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.071036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 
00:31:37.391 [2024-06-10 12:09:31.071471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.071804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.071810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 00:31:37.391 [2024-06-10 12:09:31.072168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.072527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.072534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 00:31:37.391 [2024-06-10 12:09:31.072869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.073251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.073257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 00:31:37.391 [2024-06-10 12:09:31.073608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.073986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.073993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 00:31:37.391 [2024-06-10 12:09:31.074387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.074733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.074740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 00:31:37.391 [2024-06-10 12:09:31.075079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.075422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.075429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 00:31:37.391 [2024-06-10 12:09:31.075829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.076170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.076176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 
00:31:37.391 [2024-06-10 12:09:31.076505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.076854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.076860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 00:31:37.391 [2024-06-10 12:09:31.077221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.077558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.077564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 00:31:37.391 [2024-06-10 12:09:31.077904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.078094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.078102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 00:31:37.391 [2024-06-10 12:09:31.078367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.078715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.078721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 00:31:37.391 [2024-06-10 12:09:31.079060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.079322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.079329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 00:31:37.391 [2024-06-10 12:09:31.079698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.079876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.079883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 00:31:37.391 [2024-06-10 12:09:31.080295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.080671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.080677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 
00:31:37.391 [2024-06-10 12:09:31.080904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.081223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.081229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 00:31:37.391 [2024-06-10 12:09:31.081575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.081953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.081960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 00:31:37.391 [2024-06-10 12:09:31.082309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.082666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.082672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 00:31:37.391 [2024-06-10 12:09:31.082940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.083328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.083335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 00:31:37.391 [2024-06-10 12:09:31.083585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.083931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.083937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 00:31:37.391 [2024-06-10 12:09:31.084272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.084640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.084646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 00:31:37.391 [2024-06-10 12:09:31.084995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.085360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.085366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 
00:31:37.391 [2024-06-10 12:09:31.085700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.086042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.086049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 00:31:37.391 [2024-06-10 12:09:31.086384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.086725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.086731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 00:31:37.391 [2024-06-10 12:09:31.087116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.087465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.087472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 00:31:37.391 [2024-06-10 12:09:31.087815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.088063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.088070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 00:31:37.391 [2024-06-10 12:09:31.088328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.088584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.088591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.391 qpair failed and we were unable to recover it. 00:31:37.391 [2024-06-10 12:09:31.088942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.391 [2024-06-10 12:09:31.089335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.089341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.392 qpair failed and we were unable to recover it. 00:31:37.392 [2024-06-10 12:09:31.089709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.090052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.090058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.392 qpair failed and we were unable to recover it. 
00:31:37.392 [2024-06-10 12:09:31.090405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.090725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.090731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.392 qpair failed and we were unable to recover it. 00:31:37.392 [2024-06-10 12:09:31.091066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.091476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.091482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.392 qpair failed and we were unable to recover it. 00:31:37.392 [2024-06-10 12:09:31.091870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.092258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.092265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.392 qpair failed and we were unable to recover it. 00:31:37.392 [2024-06-10 12:09:31.092684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.093034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.093040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.392 qpair failed and we were unable to recover it. 00:31:37.392 [2024-06-10 12:09:31.093377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.093642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.093649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.392 qpair failed and we were unable to recover it. 00:31:37.392 [2024-06-10 12:09:31.093889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.094111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.094118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.392 qpair failed and we were unable to recover it. 00:31:37.392 [2024-06-10 12:09:31.094278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.094639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.094645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.392 qpair failed and we were unable to recover it. 
00:31:37.392 [2024-06-10 12:09:31.094959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.095078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.095084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.392 qpair failed and we were unable to recover it. 00:31:37.392 [2024-06-10 12:09:31.095487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.095837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.095843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.392 qpair failed and we were unable to recover it. 00:31:37.392 [2024-06-10 12:09:31.096180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.096542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.096548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.392 qpair failed and we were unable to recover it. 00:31:37.392 [2024-06-10 12:09:31.096916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.097122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.097129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.392 qpair failed and we were unable to recover it. 00:31:37.392 [2024-06-10 12:09:31.097469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.097812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.097819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.392 qpair failed and we were unable to recover it. 00:31:37.392 [2024-06-10 12:09:31.098198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.098448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.098454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.392 qpair failed and we were unable to recover it. 00:31:37.392 [2024-06-10 12:09:31.098831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.099173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.099179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.392 qpair failed and we were unable to recover it. 
00:31:37.392 [2024-06-10 12:09:31.099536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.099918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.099924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.392 qpair failed and we were unable to recover it. 00:31:37.392 [2024-06-10 12:09:31.100301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.100518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.100525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.392 qpair failed and we were unable to recover it. 00:31:37.392 [2024-06-10 12:09:31.100884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.101233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.101240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.392 qpair failed and we were unable to recover it. 00:31:37.392 [2024-06-10 12:09:31.101602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.101938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.101944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.392 qpair failed and we were unable to recover it. 00:31:37.392 [2024-06-10 12:09:31.102288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.102558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.102565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.392 qpair failed and we were unable to recover it. 00:31:37.392 [2024-06-10 12:09:31.102928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.103311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.103317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.392 qpair failed and we were unable to recover it. 00:31:37.392 [2024-06-10 12:09:31.103663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.104007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.104013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.392 qpair failed and we were unable to recover it. 
00:31:37.392 [2024-06-10 12:09:31.104351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.104741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.104747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.392 qpair failed and we were unable to recover it. 00:31:37.392 [2024-06-10 12:09:31.105195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.392 [2024-06-10 12:09:31.105374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.105381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 00:31:37.393 [2024-06-10 12:09:31.105754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.106132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.106139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 00:31:37.393 [2024-06-10 12:09:31.106508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.106846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.106853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 00:31:37.393 [2024-06-10 12:09:31.107230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.107490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.107497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 00:31:37.393 [2024-06-10 12:09:31.107850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.108232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.108238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 00:31:37.393 [2024-06-10 12:09:31.108618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.108962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.108969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 
00:31:37.393 [2024-06-10 12:09:31.109306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.109399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.109406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 00:31:37.393 [2024-06-10 12:09:31.109627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.110009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.110015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 00:31:37.393 [2024-06-10 12:09:31.110394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.110742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.110749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 00:31:37.393 [2024-06-10 12:09:31.111104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.111452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.111459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 00:31:37.393 [2024-06-10 12:09:31.111811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.112152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.112158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 00:31:37.393 [2024-06-10 12:09:31.112517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.112900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.112909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 00:31:37.393 [2024-06-10 12:09:31.113290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.113704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.113711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 
00:31:37.393 [2024-06-10 12:09:31.113982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.114373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.114380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 00:31:37.393 [2024-06-10 12:09:31.114751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.115091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.115098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 00:31:37.393 [2024-06-10 12:09:31.115310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.115704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.115710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 00:31:37.393 [2024-06-10 12:09:31.116051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.116369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.116376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 00:31:37.393 [2024-06-10 12:09:31.116714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.117100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.117107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 00:31:37.393 [2024-06-10 12:09:31.117463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.117826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.117832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 00:31:37.393 [2024-06-10 12:09:31.118011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.118256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.118263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 
00:31:37.393 [2024-06-10 12:09:31.118601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.118790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.118797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 00:31:37.393 [2024-06-10 12:09:31.119179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.119481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.119489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 00:31:37.393 [2024-06-10 12:09:31.119862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.120121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.120128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 00:31:37.393 [2024-06-10 12:09:31.120417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.120763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.120770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 00:31:37.393 [2024-06-10 12:09:31.121136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.121509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.121516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 00:31:37.393 [2024-06-10 12:09:31.121875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.122218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.122224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 00:31:37.393 [2024-06-10 12:09:31.122592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.122943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.122949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 
00:31:37.393 [2024-06-10 12:09:31.123186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.123548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.123555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 00:31:37.393 [2024-06-10 12:09:31.123898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.124276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.393 [2024-06-10 12:09:31.124283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.393 qpair failed and we were unable to recover it. 00:31:37.393 [2024-06-10 12:09:31.124621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.124961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.124968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.394 qpair failed and we were unable to recover it. 00:31:37.394 [2024-06-10 12:09:31.125325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.125666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.125672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.394 qpair failed and we were unable to recover it. 00:31:37.394 [2024-06-10 12:09:31.126054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.126236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.126246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.394 qpair failed and we were unable to recover it. 00:31:37.394 [2024-06-10 12:09:31.126622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.126894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.126901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.394 qpair failed and we were unable to recover it. 00:31:37.394 [2024-06-10 12:09:31.127259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.127461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.127468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.394 qpair failed and we were unable to recover it. 
00:31:37.394 [2024-06-10 12:09:31.127934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.128273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.128279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.394 qpair failed and we were unable to recover it. 00:31:37.394 [2024-06-10 12:09:31.128634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.128973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.128980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.394 qpair failed and we were unable to recover it. 00:31:37.394 [2024-06-10 12:09:31.129369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.129712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.129718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.394 qpair failed and we were unable to recover it. 00:31:37.394 [2024-06-10 12:09:31.130069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.130408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.130415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.394 qpair failed and we were unable to recover it. 00:31:37.394 [2024-06-10 12:09:31.130759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.131055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.131061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.394 qpair failed and we were unable to recover it. 00:31:37.394 [2024-06-10 12:09:31.131330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.131690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.131697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.394 qpair failed and we were unable to recover it. 00:31:37.394 [2024-06-10 12:09:31.131977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.132348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.132354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.394 qpair failed and we were unable to recover it. 
00:31:37.394 [2024-06-10 12:09:31.132596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.132980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.132988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.394 qpair failed and we were unable to recover it. 00:31:37.394 [2024-06-10 12:09:31.133338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.133600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.133606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.394 qpair failed and we were unable to recover it. 00:31:37.394 [2024-06-10 12:09:31.134059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.134415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.134422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.394 qpair failed and we were unable to recover it. 00:31:37.394 [2024-06-10 12:09:31.134761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.135154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.135161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.394 qpair failed and we were unable to recover it. 00:31:37.394 [2024-06-10 12:09:31.135499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.135844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.135850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.394 qpair failed and we were unable to recover it. 00:31:37.394 [2024-06-10 12:09:31.136199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.136547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.136554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.394 qpair failed and we were unable to recover it. 00:31:37.394 [2024-06-10 12:09:31.136902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.137299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.137306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.394 qpair failed and we were unable to recover it. 
00:31:37.394 [2024-06-10 12:09:31.137642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.137994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.138000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.394 qpair failed and we were unable to recover it. 00:31:37.394 [2024-06-10 12:09:31.138340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.138676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.138682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.394 qpair failed and we were unable to recover it. 00:31:37.394 [2024-06-10 12:09:31.139089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.139444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.139451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.394 qpair failed and we were unable to recover it. 00:31:37.394 [2024-06-10 12:09:31.139835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.140183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.140189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.394 qpair failed and we were unable to recover it. 00:31:37.394 [2024-06-10 12:09:31.140551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.140935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.394 [2024-06-10 12:09:31.140941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.394 qpair failed and we were unable to recover it. 00:31:37.665 [2024-06-10 12:09:31.141312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.141680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.141688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.665 qpair failed and we were unable to recover it. 00:31:37.665 [2024-06-10 12:09:31.142009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.142385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.142391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.665 qpair failed and we were unable to recover it. 
00:31:37.665 [2024-06-10 12:09:31.142638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.143002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.143008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.665 qpair failed and we were unable to recover it. 00:31:37.665 [2024-06-10 12:09:31.143361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.143753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.143760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.665 qpair failed and we were unable to recover it. 00:31:37.665 [2024-06-10 12:09:31.143905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.144251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.144257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.665 qpair failed and we were unable to recover it. 00:31:37.665 [2024-06-10 12:09:31.144602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.144779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.144786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.665 qpair failed and we were unable to recover it. 00:31:37.665 [2024-06-10 12:09:31.145125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.145453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.145460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.665 qpair failed and we were unable to recover it. 00:31:37.665 [2024-06-10 12:09:31.145816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.146158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.146164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.665 qpair failed and we were unable to recover it. 00:31:37.665 [2024-06-10 12:09:31.146489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.146860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.146867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.665 qpair failed and we were unable to recover it. 
00:31:37.665 [2024-06-10 12:09:31.147205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.147621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.147627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.665 qpair failed and we were unable to recover it. 00:31:37.665 [2024-06-10 12:09:31.147922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.148335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.148342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.665 qpair failed and we were unable to recover it. 00:31:37.665 [2024-06-10 12:09:31.148608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.148969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.148975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.665 qpair failed and we were unable to recover it. 00:31:37.665 [2024-06-10 12:09:31.149231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.149593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.149600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.665 qpair failed and we were unable to recover it. 00:31:37.665 [2024-06-10 12:09:31.149939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.150319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.150325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.665 qpair failed and we were unable to recover it. 00:31:37.665 [2024-06-10 12:09:31.150676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.151023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.151030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.665 qpair failed and we were unable to recover it. 00:31:37.665 [2024-06-10 12:09:31.151407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.151798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.151804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.665 qpair failed and we were unable to recover it. 
00:31:37.665 [2024-06-10 12:09:31.152146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.152483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.152489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.665 qpair failed and we were unable to recover it. 00:31:37.665 [2024-06-10 12:09:31.152827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.153168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.665 [2024-06-10 12:09:31.153174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.665 qpair failed and we were unable to recover it. 00:31:37.665 [2024-06-10 12:09:31.153438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.153748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.153754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 00:31:37.666 [2024-06-10 12:09:31.154137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.154551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.154558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 00:31:37.666 [2024-06-10 12:09:31.154912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.155261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.155268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 00:31:37.666 [2024-06-10 12:09:31.155622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.155972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.155978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 00:31:37.666 [2024-06-10 12:09:31.156314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.156664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.156670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 
00:31:37.666 [2024-06-10 12:09:31.157013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.157377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.157383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 00:31:37.666 [2024-06-10 12:09:31.157471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.157883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.157889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 00:31:37.666 [2024-06-10 12:09:31.158235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.158581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.158588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 00:31:37.666 [2024-06-10 12:09:31.158637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.158973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.158980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 00:31:37.666 [2024-06-10 12:09:31.159323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.159588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.159594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 00:31:37.666 [2024-06-10 12:09:31.159914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.160307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.160313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 00:31:37.666 [2024-06-10 12:09:31.160657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.161000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.161006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 
00:31:37.666 [2024-06-10 12:09:31.161340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.161685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.161692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 00:31:37.666 [2024-06-10 12:09:31.162057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.162398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.162405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 00:31:37.666 [2024-06-10 12:09:31.162709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.163060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.163066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 00:31:37.666 [2024-06-10 12:09:31.163430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.163775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.163781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 00:31:37.666 [2024-06-10 12:09:31.164012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.164229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.164237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 00:31:37.666 [2024-06-10 12:09:31.164591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.164972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.164979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 00:31:37.666 [2024-06-10 12:09:31.165336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.165723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.165730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 
00:31:37.666 [2024-06-10 12:09:31.166074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.166294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.166301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 00:31:37.666 [2024-06-10 12:09:31.166625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.166959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.166965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 00:31:37.666 [2024-06-10 12:09:31.167311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.167660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.167667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 00:31:37.666 [2024-06-10 12:09:31.167889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.168234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.168241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 00:31:37.666 [2024-06-10 12:09:31.168625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.168968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.168974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 00:31:37.666 [2024-06-10 12:09:31.169308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.169649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.169655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 00:31:37.666 [2024-06-10 12:09:31.169996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.170216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.170223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 
00:31:37.666 [2024-06-10 12:09:31.170468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.170843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.170849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 00:31:37.666 [2024-06-10 12:09:31.171194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.171540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.171547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 00:31:37.666 [2024-06-10 12:09:31.171790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.172166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.666 [2024-06-10 12:09:31.172172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.666 qpair failed and we were unable to recover it. 00:31:37.667 [2024-06-10 12:09:31.172524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.172909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.172916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.667 qpair failed and we were unable to recover it. 00:31:37.667 [2024-06-10 12:09:31.173261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.173578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.173584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.667 qpair failed and we were unable to recover it. 00:31:37.667 [2024-06-10 12:09:31.173920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.174346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.174353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.667 qpair failed and we were unable to recover it. 00:31:37.667 [2024-06-10 12:09:31.174714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.175103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.175110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.667 qpair failed and we were unable to recover it. 
00:31:37.667 [2024-06-10 12:09:31.175493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.175924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.175930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.667 qpair failed and we were unable to recover it. 00:31:37.667 [2024-06-10 12:09:31.176188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.176428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.176435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.667 qpair failed and we were unable to recover it. 00:31:37.667 [2024-06-10 12:09:31.176810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.177008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.177016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.667 qpair failed and we were unable to recover it. 00:31:37.667 [2024-06-10 12:09:31.177282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.177660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.177666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.667 qpair failed and we were unable to recover it. 00:31:37.667 [2024-06-10 12:09:31.178008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.178292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.178299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.667 qpair failed and we were unable to recover it. 00:31:37.667 [2024-06-10 12:09:31.178671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.179019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.179026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.667 qpair failed and we were unable to recover it. 00:31:37.667 [2024-06-10 12:09:31.179356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.179736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.179742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.667 qpair failed and we were unable to recover it. 
00:31:37.667 [2024-06-10 12:09:31.180074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.180456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.180463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.667 qpair failed and we were unable to recover it. 00:31:37.667 [2024-06-10 12:09:31.180795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.181142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.181149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.667 qpair failed and we were unable to recover it. 00:31:37.667 [2024-06-10 12:09:31.181498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.181882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.181889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.667 qpair failed and we were unable to recover it. 00:31:37.667 [2024-06-10 12:09:31.182266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.182629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.182636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.667 qpair failed and we were unable to recover it. 00:31:37.667 [2024-06-10 12:09:31.182992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.183345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.183351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.667 qpair failed and we were unable to recover it. 00:31:37.667 [2024-06-10 12:09:31.183589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.183916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.183922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.667 qpair failed and we were unable to recover it. 00:31:37.667 [2024-06-10 12:09:31.184282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.184650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.184656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.667 qpair failed and we were unable to recover it. 
00:31:37.667 [2024-06-10 12:09:31.184874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.185234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.185241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.667 qpair failed and we were unable to recover it. 00:31:37.667 [2024-06-10 12:09:31.185622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.185961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.185968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.667 qpair failed and we were unable to recover it. 00:31:37.667 [2024-06-10 12:09:31.186335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.186682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.186689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.667 qpair failed and we were unable to recover it. 00:31:37.667 [2024-06-10 12:09:31.186946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.187291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.187297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.667 qpair failed and we were unable to recover it. 00:31:37.667 [2024-06-10 12:09:31.187554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.187928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.187934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.667 qpair failed and we were unable to recover it. 00:31:37.667 [2024-06-10 12:09:31.188299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.188558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.188564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.667 qpair failed and we were unable to recover it. 00:31:37.667 [2024-06-10 12:09:31.188916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.189256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.189263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.667 qpair failed and we were unable to recover it. 
00:31:37.667 [2024-06-10 12:09:31.189588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.189985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.189991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.667 qpair failed and we were unable to recover it. 00:31:37.667 [2024-06-10 12:09:31.190331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.190676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.190682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.667 qpair failed and we were unable to recover it. 00:31:37.667 [2024-06-10 12:09:31.191061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.191277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.191284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.667 qpair failed and we were unable to recover it. 00:31:37.667 [2024-06-10 12:09:31.191682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.192025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.667 [2024-06-10 12:09:31.192031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 00:31:37.668 [2024-06-10 12:09:31.192359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.192737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.192743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 00:31:37.668 [2024-06-10 12:09:31.193115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.193391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.193397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 00:31:37.668 [2024-06-10 12:09:31.193747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.194094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.194100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 
00:31:37.668 [2024-06-10 12:09:31.194439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.194788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.194794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 00:31:37.668 [2024-06-10 12:09:31.195137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.195398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.195404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 00:31:37.668 [2024-06-10 12:09:31.195785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.196170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.196177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 00:31:37.668 [2024-06-10 12:09:31.196535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.196880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.196886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 00:31:37.668 [2024-06-10 12:09:31.197223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.197471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.197478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 00:31:37.668 [2024-06-10 12:09:31.197839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.198179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.198185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 00:31:37.668 [2024-06-10 12:09:31.198539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.198881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.198887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 
00:31:37.668 [2024-06-10 12:09:31.199231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.199642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.199649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 00:31:37.668 [2024-06-10 12:09:31.199907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.200273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.200280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 00:31:37.668 [2024-06-10 12:09:31.200629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.200978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.200985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 00:31:37.668 [2024-06-10 12:09:31.201325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.201692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.201698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 00:31:37.668 [2024-06-10 12:09:31.202057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.202335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.202342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 00:31:37.668 [2024-06-10 12:09:31.202666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.203010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.203017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 00:31:37.668 [2024-06-10 12:09:31.203374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.203592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.203600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 
00:31:37.668 [2024-06-10 12:09:31.203929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.204313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.204320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 00:31:37.668 [2024-06-10 12:09:31.204675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.205016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.205022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 00:31:37.668 [2024-06-10 12:09:31.205360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.205720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.205726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 00:31:37.668 [2024-06-10 12:09:31.205910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.206280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.206287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 00:31:37.668 [2024-06-10 12:09:31.206643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.206993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.206999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 00:31:37.668 [2024-06-10 12:09:31.207335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.207659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.207665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 00:31:37.668 [2024-06-10 12:09:31.208008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.208352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.208359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 
00:31:37.668 [2024-06-10 12:09:31.208772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.209139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.209147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 00:31:37.668 [2024-06-10 12:09:31.209499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.209840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.209846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 00:31:37.668 [2024-06-10 12:09:31.210113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.210459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.210465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 00:31:37.668 [2024-06-10 12:09:31.210802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.211051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.211057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.668 qpair failed and we were unable to recover it. 00:31:37.668 [2024-06-10 12:09:31.211396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.668 [2024-06-10 12:09:31.211760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.211767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.669 qpair failed and we were unable to recover it. 00:31:37.669 [2024-06-10 12:09:31.212149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.212501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.212508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.669 qpair failed and we were unable to recover it. 00:31:37.669 [2024-06-10 12:09:31.212870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.213215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.213221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.669 qpair failed and we were unable to recover it. 
00:31:37.669 [2024-06-10 12:09:31.213634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.213977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.213983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.669 qpair failed and we were unable to recover it. 00:31:37.669 [2024-06-10 12:09:31.214318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.214716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.214722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.669 qpair failed and we were unable to recover it. 00:31:37.669 [2024-06-10 12:09:31.214963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.215236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.215247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.669 qpair failed and we were unable to recover it. 00:31:37.669 [2024-06-10 12:09:31.215593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.215949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.215955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.669 qpair failed and we were unable to recover it. 00:31:37.669 [2024-06-10 12:09:31.216292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.216642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.216649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.669 qpair failed and we were unable to recover it. 00:31:37.669 [2024-06-10 12:09:31.216938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.217151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.217158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.669 qpair failed and we were unable to recover it. 00:31:37.669 [2024-06-10 12:09:31.217511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.217856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.217862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.669 qpair failed and we were unable to recover it. 
00:31:37.669 [2024-06-10 12:09:31.218098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.218462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.218468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.669 qpair failed and we were unable to recover it. 00:31:37.669 [2024-06-10 12:09:31.218803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.219159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.219165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.669 qpair failed and we were unable to recover it. 00:31:37.669 [2024-06-10 12:09:31.219511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.219851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.219857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.669 qpair failed and we were unable to recover it. 00:31:37.669 [2024-06-10 12:09:31.220202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.220458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.220465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.669 qpair failed and we were unable to recover it. 00:31:37.669 [2024-06-10 12:09:31.220718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.221060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.221067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.669 qpair failed and we were unable to recover it. 00:31:37.669 [2024-06-10 12:09:31.221433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.221783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.221791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.669 qpair failed and we were unable to recover it. 00:31:37.669 [2024-06-10 12:09:31.222131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.222526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.222533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.669 qpair failed and we were unable to recover it. 
00:31:37.669 [2024-06-10 12:09:31.222870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.223219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.223225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.669 qpair failed and we were unable to recover it. 00:31:37.669 [2024-06-10 12:09:31.223527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.223929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.223935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.669 qpair failed and we were unable to recover it. 00:31:37.669 [2024-06-10 12:09:31.224279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.224647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.224653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.669 qpair failed and we were unable to recover it. 00:31:37.669 [2024-06-10 12:09:31.224912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.225166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.225173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.669 qpair failed and we were unable to recover it. 00:31:37.669 [2024-06-10 12:09:31.225434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.225697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.225704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.669 qpair failed and we were unable to recover it. 00:31:37.669 [2024-06-10 12:09:31.225919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.226277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.226284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.669 qpair failed and we were unable to recover it. 00:31:37.669 [2024-06-10 12:09:31.226593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.226935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.226941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.669 qpair failed and we were unable to recover it. 
00:31:37.669 [2024-06-10 12:09:31.227275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.669 [2024-06-10 12:09:31.227611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.227617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.670 qpair failed and we were unable to recover it. 00:31:37.670 [2024-06-10 12:09:31.227955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.228381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.228389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.670 qpair failed and we were unable to recover it. 00:31:37.670 [2024-06-10 12:09:31.228729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.229070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.229077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.670 qpair failed and we were unable to recover it. 00:31:37.670 [2024-06-10 12:09:31.229331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.229709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.229715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.670 qpair failed and we were unable to recover it. 00:31:37.670 [2024-06-10 12:09:31.230050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.230444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.230451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.670 qpair failed and we were unable to recover it. 00:31:37.670 [2024-06-10 12:09:31.230794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.231135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.231141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.670 qpair failed and we were unable to recover it. 00:31:37.670 [2024-06-10 12:09:31.231513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.231864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.231870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.670 qpair failed and we were unable to recover it. 
00:31:37.670 [2024-06-10 12:09:31.232210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.232551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.232559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.670 qpair failed and we were unable to recover it. 00:31:37.670 [2024-06-10 12:09:31.232906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.233292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.233299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.670 qpair failed and we were unable to recover it. 00:31:37.670 [2024-06-10 12:09:31.233641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.233984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.233990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.670 qpair failed and we were unable to recover it. 00:31:37.670 [2024-06-10 12:09:31.234327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.234668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.234674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.670 qpair failed and we were unable to recover it. 00:31:37.670 [2024-06-10 12:09:31.235017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.235357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.235365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.670 qpair failed and we were unable to recover it. 00:31:37.670 [2024-06-10 12:09:31.235729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.236095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.236101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.670 qpair failed and we were unable to recover it. 00:31:37.670 [2024-06-10 12:09:31.236359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.236706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.236713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.670 qpair failed and we were unable to recover it. 
00:31:37.670 [2024-06-10 12:09:31.237061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.237401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.237408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.670 qpair failed and we were unable to recover it. 00:31:37.670 [2024-06-10 12:09:31.237758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.238103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.238109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.670 qpair failed and we were unable to recover it. 00:31:37.670 [2024-06-10 12:09:31.238445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.238633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.238640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.670 qpair failed and we were unable to recover it. 00:31:37.670 [2024-06-10 12:09:31.239004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.239387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.239394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.670 qpair failed and we were unable to recover it. 00:31:37.670 [2024-06-10 12:09:31.239756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.240144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.240151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.670 qpair failed and we were unable to recover it. 00:31:37.670 [2024-06-10 12:09:31.240500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.240844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.240850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.670 qpair failed and we were unable to recover it. 00:31:37.670 [2024-06-10 12:09:31.241211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.241557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.241563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.670 qpair failed and we were unable to recover it. 
00:31:37.670 [2024-06-10 12:09:31.241902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.242147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.242153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.670 qpair failed and we were unable to recover it. 00:31:37.670 [2024-06-10 12:09:31.242250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.242583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.242589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.670 qpair failed and we were unable to recover it. 00:31:37.670 [2024-06-10 12:09:31.242930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.243278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.243284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.670 qpair failed and we were unable to recover it. 00:31:37.670 [2024-06-10 12:09:31.243631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.243981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.243987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.670 qpair failed and we were unable to recover it. 00:31:37.670 [2024-06-10 12:09:31.244371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.244716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.244722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.670 qpair failed and we were unable to recover it. 00:31:37.670 [2024-06-10 12:09:31.245062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.245323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.245329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.670 qpair failed and we were unable to recover it. 00:31:37.670 [2024-06-10 12:09:31.245552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.245930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.245936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.670 qpair failed and we were unable to recover it. 
00:31:37.670 [2024-06-10 12:09:31.246278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.670 [2024-06-10 12:09:31.246613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.246620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 00:31:37.671 [2024-06-10 12:09:31.247003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.247348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.247355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 00:31:37.671 [2024-06-10 12:09:31.247741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.247933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.247940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 00:31:37.671 [2024-06-10 12:09:31.248302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.248650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.248656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 00:31:37.671 [2024-06-10 12:09:31.248992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.249210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.249216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 00:31:37.671 [2024-06-10 12:09:31.249578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.249943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.249949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 00:31:37.671 [2024-06-10 12:09:31.250289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.250638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.250644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 
00:31:37.671 [2024-06-10 12:09:31.250980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.251274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.251280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 00:31:37.671 [2024-06-10 12:09:31.251528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.251876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.251882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 00:31:37.671 [2024-06-10 12:09:31.252226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.252579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.252586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 00:31:37.671 [2024-06-10 12:09:31.253002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.253328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.253335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 00:31:37.671 [2024-06-10 12:09:31.253707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.254049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.254055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 00:31:37.671 [2024-06-10 12:09:31.254298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.254612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.254618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 00:31:37.671 [2024-06-10 12:09:31.254869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.255215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.255221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 
00:31:37.671 [2024-06-10 12:09:31.255598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.255818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.255825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 00:31:37.671 [2024-06-10 12:09:31.256189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.256532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.256539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 00:31:37.671 [2024-06-10 12:09:31.256839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.257189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.257195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 00:31:37.671 [2024-06-10 12:09:31.257610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.257917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.257923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 00:31:37.671 [2024-06-10 12:09:31.258102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.258419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.258425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 00:31:37.671 [2024-06-10 12:09:31.258805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.259126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.259132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 00:31:37.671 [2024-06-10 12:09:31.259371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.259763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.259769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 
00:31:37.671 [2024-06-10 12:09:31.260103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.260443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.260449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 00:31:37.671 [2024-06-10 12:09:31.260785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.261045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.261052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 00:31:37.671 [2024-06-10 12:09:31.261325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.261650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.261657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 00:31:37.671 [2024-06-10 12:09:31.262017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.262364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.262370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 00:31:37.671 [2024-06-10 12:09:31.262717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.263062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.263068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 00:31:37.671 [2024-06-10 12:09:31.263363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.263723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.263729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 00:31:37.671 [2024-06-10 12:09:31.264066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.264434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.264440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 
00:31:37.671 [2024-06-10 12:09:31.264777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.265164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.265170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.671 qpair failed and we were unable to recover it. 00:31:37.671 [2024-06-10 12:09:31.265520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.671 [2024-06-10 12:09:31.265859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.265866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 00:31:37.672 [2024-06-10 12:09:31.266205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.266559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.266566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 00:31:37.672 [2024-06-10 12:09:31.266910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.267251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.267258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 00:31:37.672 [2024-06-10 12:09:31.267604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.267957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.267964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 00:31:37.672 [2024-06-10 12:09:31.268251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.268593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.268600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 00:31:37.672 [2024-06-10 12:09:31.268935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.269276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.269283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 
00:31:37.672 [2024-06-10 12:09:31.269636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.269830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.269836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 00:31:37.672 [2024-06-10 12:09:31.270108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.270478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.270485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 00:31:37.672 [2024-06-10 12:09:31.270823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.271163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.271169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 00:31:37.672 [2024-06-10 12:09:31.271516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.271771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.271778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 00:31:37.672 [2024-06-10 12:09:31.272152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.272511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.272518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 00:31:37.672 [2024-06-10 12:09:31.272804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.273105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.273111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 00:31:37.672 [2024-06-10 12:09:31.273447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.273813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.273819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 
00:31:37.672 [2024-06-10 12:09:31.274180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.274529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.274536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 00:31:37.672 [2024-06-10 12:09:31.274872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.275214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.275221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 00:31:37.672 [2024-06-10 12:09:31.275577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.275954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.275961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 00:31:37.672 [2024-06-10 12:09:31.276336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.276651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.276658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 00:31:37.672 [2024-06-10 12:09:31.276993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.277332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.277339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 00:31:37.672 [2024-06-10 12:09:31.277709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.278055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.278061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 00:31:37.672 [2024-06-10 12:09:31.278315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.278665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.278672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 
00:31:37.672 [2024-06-10 12:09:31.279035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.279381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.279388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 00:31:37.672 [2024-06-10 12:09:31.279742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.280125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.280132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 00:31:37.672 [2024-06-10 12:09:31.280545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.280798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.280804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 00:31:37.672 [2024-06-10 12:09:31.281061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.281257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.281265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 00:31:37.672 [2024-06-10 12:09:31.281606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.281951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.281957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 00:31:37.672 [2024-06-10 12:09:31.282327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.282676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.282683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 00:31:37.672 [2024-06-10 12:09:31.283020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.283363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.283370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 
00:31:37.672 [2024-06-10 12:09:31.283739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.284081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.284087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 00:31:37.672 [2024-06-10 12:09:31.284424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.284810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.284817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.672 qpair failed and we were unable to recover it. 00:31:37.672 [2024-06-10 12:09:31.285079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.672 [2024-06-10 12:09:31.285461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.285467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 00:31:37.673 [2024-06-10 12:09:31.285723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.286094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.286100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 00:31:37.673 [2024-06-10 12:09:31.286444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.286784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.286790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 00:31:37.673 [2024-06-10 12:09:31.287127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.287479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.287486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 00:31:37.673 [2024-06-10 12:09:31.287845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.288227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.288234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 
00:31:37.673 [2024-06-10 12:09:31.288594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.288933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.288939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 00:31:37.673 [2024-06-10 12:09:31.289456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.289836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.289845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 00:31:37.673 [2024-06-10 12:09:31.290188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.290545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.290552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 00:31:37.673 [2024-06-10 12:09:31.290968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.291309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.291316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 00:31:37.673 [2024-06-10 12:09:31.291694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.292039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.292046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 00:31:37.673 [2024-06-10 12:09:31.292409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.292602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.292616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 00:31:37.673 [2024-06-10 12:09:31.292973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.293234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.293240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 
00:31:37.673 [2024-06-10 12:09:31.293456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.293818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.293824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 00:31:37.673 [2024-06-10 12:09:31.294006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.294326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.294333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 00:31:37.673 [2024-06-10 12:09:31.294695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.295039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.295045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 00:31:37.673 [2024-06-10 12:09:31.295414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.295638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.295644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 00:31:37.673 [2024-06-10 12:09:31.296003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.296338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.296344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 00:31:37.673 [2024-06-10 12:09:31.296682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.297033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.297039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 00:31:37.673 [2024-06-10 12:09:31.297403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.297790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.297797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 
00:31:37.673 [2024-06-10 12:09:31.298150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.298512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.298519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 00:31:37.673 [2024-06-10 12:09:31.298852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.299199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.299205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 00:31:37.673 [2024-06-10 12:09:31.299556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.299899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.299905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 00:31:37.673 [2024-06-10 12:09:31.300257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.300587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.300593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 00:31:37.673 [2024-06-10 12:09:31.300932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.301313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.301320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 00:31:37.673 [2024-06-10 12:09:31.301654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.301897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.301903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 00:31:37.673 [2024-06-10 12:09:31.302125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.302474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.302480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 
00:31:37.673 [2024-06-10 12:09:31.302819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.303165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.303171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 00:31:37.673 [2024-06-10 12:09:31.303509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.303894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.303900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 00:31:37.673 [2024-06-10 12:09:31.304227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.304620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.673 [2024-06-10 12:09:31.304626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.673 qpair failed and we were unable to recover it. 00:31:37.673 [2024-06-10 12:09:31.304965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.305305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.305312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.674 qpair failed and we were unable to recover it. 00:31:37.674 [2024-06-10 12:09:31.305753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.306093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.306099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.674 qpair failed and we were unable to recover it. 00:31:37.674 [2024-06-10 12:09:31.306437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.306784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.306791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.674 qpair failed and we were unable to recover it. 00:31:37.674 [2024-06-10 12:09:31.307130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.307484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.307491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.674 qpair failed and we were unable to recover it. 
00:31:37.674 [2024-06-10 12:09:31.307827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.308176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.308182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.674 qpair failed and we were unable to recover it. 00:31:37.674 [2024-06-10 12:09:31.308517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.308746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.308753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.674 qpair failed and we were unable to recover it. 00:31:37.674 [2024-06-10 12:09:31.308984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.309183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.309191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.674 qpair failed and we were unable to recover it. 00:31:37.674 [2024-06-10 12:09:31.309561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.309831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.309837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.674 qpair failed and we were unable to recover it. 00:31:37.674 [2024-06-10 12:09:31.310214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.310571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.310579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.674 qpair failed and we were unable to recover it. 00:31:37.674 [2024-06-10 12:09:31.310956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.311308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.311314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.674 qpair failed and we were unable to recover it. 00:31:37.674 [2024-06-10 12:09:31.311687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.311917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.311924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.674 qpair failed and we were unable to recover it. 
00:31:37.674 [2024-06-10 12:09:31.312301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.312645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.312651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.674 qpair failed and we were unable to recover it. 00:31:37.674 [2024-06-10 12:09:31.312986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.313326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.313333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.674 qpair failed and we were unable to recover it. 00:31:37.674 [2024-06-10 12:09:31.313689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.314033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.314039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.674 qpair failed and we were unable to recover it. 00:31:37.674 [2024-06-10 12:09:31.314378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.314720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.314726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.674 qpair failed and we were unable to recover it. 00:31:37.674 [2024-06-10 12:09:31.315063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.315417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.315424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.674 qpair failed and we were unable to recover it. 00:31:37.674 [2024-06-10 12:09:31.315835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.316177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.316183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.674 qpair failed and we were unable to recover it. 00:31:37.674 [2024-06-10 12:09:31.316517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.316905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.316912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.674 qpair failed and we were unable to recover it. 
00:31:37.674 [2024-06-10 12:09:31.317270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.317615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.317621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.674 qpair failed and we were unable to recover it. 00:31:37.674 [2024-06-10 12:09:31.317962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.318344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.318350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.674 qpair failed and we were unable to recover it. 00:31:37.674 [2024-06-10 12:09:31.318687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.319020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.319026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.674 qpair failed and we were unable to recover it. 00:31:37.674 [2024-06-10 12:09:31.319354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.319735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.319741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.674 qpair failed and we were unable to recover it. 00:31:37.674 [2024-06-10 12:09:31.320076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.320420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.320427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.674 qpair failed and we were unable to recover it. 00:31:37.674 [2024-06-10 12:09:31.320783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.321169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.321176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.674 qpair failed and we were unable to recover it. 00:31:37.674 [2024-06-10 12:09:31.321525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.321782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.674 [2024-06-10 12:09:31.321788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 
00:31:37.675 [2024-06-10 12:09:31.322123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.322265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.322272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 00:31:37.675 [2024-06-10 12:09:31.322612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.322995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.323001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 00:31:37.675 [2024-06-10 12:09:31.323338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.323725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.323732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 00:31:37.675 [2024-06-10 12:09:31.324076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.324416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.324422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 00:31:37.675 [2024-06-10 12:09:31.324767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.325122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.325129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 00:31:37.675 [2024-06-10 12:09:31.325504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.325847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.325853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 00:31:37.675 [2024-06-10 12:09:31.326145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.326514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.326521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 
00:31:37.675 [2024-06-10 12:09:31.326861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.327183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.327189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 00:31:37.675 [2024-06-10 12:09:31.327402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.327754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.327760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 00:31:37.675 [2024-06-10 12:09:31.328103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.328444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.328451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 00:31:37.675 [2024-06-10 12:09:31.328831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.329169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.329175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 00:31:37.675 [2024-06-10 12:09:31.329524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.329866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.329872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 00:31:37.675 [2024-06-10 12:09:31.330206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.330550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.330562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 00:31:37.675 [2024-06-10 12:09:31.330897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.331239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.331249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 
00:31:37.675 [2024-06-10 12:09:31.331603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.331945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.331951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 00:31:37.675 [2024-06-10 12:09:31.332161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.332535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.332541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 00:31:37.675 [2024-06-10 12:09:31.332876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.333217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.333223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 00:31:37.675 [2024-06-10 12:09:31.333563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.333908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.333914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 00:31:37.675 [2024-06-10 12:09:31.334251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.334623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.334629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 00:31:37.675 [2024-06-10 12:09:31.334998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.335339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.335346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 00:31:37.675 [2024-06-10 12:09:31.335716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.335871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.335879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 
00:31:37.675 [2024-06-10 12:09:31.336252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.336596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.336602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 00:31:37.675 [2024-06-10 12:09:31.336944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.337284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.337293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 00:31:37.675 [2024-06-10 12:09:31.337648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.337996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.338002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 00:31:37.675 [2024-06-10 12:09:31.338221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.338592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.338599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 00:31:37.675 [2024-06-10 12:09:31.338936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.339315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.339321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 00:31:37.675 [2024-06-10 12:09:31.339663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.340002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.340008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 00:31:37.675 [2024-06-10 12:09:31.340378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.340714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.340720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.675 qpair failed and we were unable to recover it. 
00:31:37.675 [2024-06-10 12:09:31.341048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.675 [2024-06-10 12:09:31.341381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.341387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 00:31:37.676 [2024-06-10 12:09:31.341712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.341980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.341987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 00:31:37.676 [2024-06-10 12:09:31.342342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.342708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.342714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 00:31:37.676 [2024-06-10 12:09:31.343051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.343241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.343251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 00:31:37.676 [2024-06-10 12:09:31.343483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.343864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.343872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 00:31:37.676 [2024-06-10 12:09:31.344283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.344624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.344630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 00:31:37.676 [2024-06-10 12:09:31.345008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.345355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.345362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 
00:31:37.676 [2024-06-10 12:09:31.345755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.346082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.346088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 00:31:37.676 [2024-06-10 12:09:31.346343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.346683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.346689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 00:31:37.676 [2024-06-10 12:09:31.346936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.347272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.347278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 00:31:37.676 [2024-06-10 12:09:31.347535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.347900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.347906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 00:31:37.676 [2024-06-10 12:09:31.348246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.348617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.348623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 00:31:37.676 [2024-06-10 12:09:31.348963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.349309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.349316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 00:31:37.676 [2024-06-10 12:09:31.349669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.350008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.350014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 
00:31:37.676 [2024-06-10 12:09:31.350355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.350738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.350745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 00:31:37.676 [2024-06-10 12:09:31.351127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.351496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.351502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 00:31:37.676 [2024-06-10 12:09:31.351882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.352225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.352231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 00:31:37.676 [2024-06-10 12:09:31.352575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.353000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.353006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 00:31:37.676 [2024-06-10 12:09:31.353443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.353864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.353873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 00:31:37.676 [2024-06-10 12:09:31.354203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.354597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.354604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 00:31:37.676 [2024-06-10 12:09:31.354943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.355287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.355294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 
00:31:37.676 [2024-06-10 12:09:31.355637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.355990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.355996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 00:31:37.676 [2024-06-10 12:09:31.356318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.356679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.356685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 00:31:37.676 [2024-06-10 12:09:31.356923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.357273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.357279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 00:31:37.676 [2024-06-10 12:09:31.357614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.357957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.357963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 00:31:37.676 [2024-06-10 12:09:31.358298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.358640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.358648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 00:31:37.676 [2024-06-10 12:09:31.359004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.359344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.359350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 00:31:37.676 [2024-06-10 12:09:31.359729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.360099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.360105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 
00:31:37.676 [2024-06-10 12:09:31.360326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.360698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.360704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.676 qpair failed and we were unable to recover it. 00:31:37.676 [2024-06-10 12:09:31.361041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.676 [2024-06-10 12:09:31.361383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.361389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.677 qpair failed and we were unable to recover it. 00:31:37.677 [2024-06-10 12:09:31.361723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.362065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.362071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.677 qpair failed and we were unable to recover it. 00:31:37.677 [2024-06-10 12:09:31.362418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.362763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.362769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.677 qpair failed and we were unable to recover it. 00:31:37.677 [2024-06-10 12:09:31.363104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.363448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.363454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.677 qpair failed and we were unable to recover it. 00:31:37.677 [2024-06-10 12:09:31.363864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.364201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.364208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.677 qpair failed and we were unable to recover it. 00:31:37.677 [2024-06-10 12:09:31.364561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.364906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.364912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.677 qpair failed and we were unable to recover it. 
00:31:37.677 [2024-06-10 12:09:31.365100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.365458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.365464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.677 qpair failed and we were unable to recover it. 00:31:37.677 [2024-06-10 12:09:31.365799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.366179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.366185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.677 qpair failed and we were unable to recover it. 00:31:37.677 [2024-06-10 12:09:31.366539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.366967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.366973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.677 qpair failed and we were unable to recover it. 00:31:37.677 [2024-06-10 12:09:31.367264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.367627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.367633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.677 qpair failed and we were unable to recover it. 00:31:37.677 [2024-06-10 12:09:31.367685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.368012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.368019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.677 qpair failed and we were unable to recover it. 00:31:37.677 [2024-06-10 12:09:31.368264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.368579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.368585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.677 qpair failed and we were unable to recover it. 00:31:37.677 [2024-06-10 12:09:31.368928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.369223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.369229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.677 qpair failed and we were unable to recover it. 
00:31:37.677 [2024-06-10 12:09:31.369608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.369872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.369878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.677 qpair failed and we were unable to recover it. 00:31:37.677 [2024-06-10 12:09:31.370213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.370480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.370487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.677 qpair failed and we were unable to recover it. 00:31:37.677 [2024-06-10 12:09:31.370864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.371214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.371221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.677 qpair failed and we were unable to recover it. 00:31:37.677 [2024-06-10 12:09:31.371596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.371982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.371989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.677 qpair failed and we were unable to recover it. 00:31:37.677 [2024-06-10 12:09:31.372327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.372668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.372674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.677 qpair failed and we were unable to recover it. 00:31:37.677 [2024-06-10 12:09:31.372887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.373237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.373249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.677 qpair failed and we were unable to recover it. 00:31:37.677 [2024-06-10 12:09:31.373516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 2156453 Killed "${NVMF_APP[@]}" "$@" 00:31:37.677 [2024-06-10 12:09:31.373867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.373874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.677 qpair failed and we were unable to recover it. 
00:31:37.677 12:09:31 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:31:37.677 [2024-06-10 12:09:31.374235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 12:09:31 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:37.677 [2024-06-10 12:09:31.374590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.374596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.677 qpair failed and we were unable to recover it. 00:31:37.677 12:09:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:37.677 [2024-06-10 12:09:31.374851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 12:09:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:37.677 [2024-06-10 12:09:31.375202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.375208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.677 qpair failed and we were unable to recover it. 00:31:37.677 12:09:31 -- common/autotest_common.sh@10 -- # set +x 00:31:37.677 [2024-06-10 12:09:31.375552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.375896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.375903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.677 qpair failed and we were unable to recover it. 00:31:37.677 [2024-06-10 12:09:31.376239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.376600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.376607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.677 qpair failed and we were unable to recover it. 00:31:37.677 [2024-06-10 12:09:31.376921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.377138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.377146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.677 qpair failed and we were unable to recover it. 00:31:37.677 [2024-06-10 12:09:31.377553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.377807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.377814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.677 qpair failed and we were unable to recover it. 
00:31:37.677 [2024-06-10 12:09:31.378050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.378410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.378416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.677 qpair failed and we were unable to recover it. 00:31:37.677 [2024-06-10 12:09:31.378791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.378987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.677 [2024-06-10 12:09:31.378996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.677 qpair failed and we were unable to recover it. 00:31:37.678 [2024-06-10 12:09:31.379345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.379711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.379718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.678 qpair failed and we were unable to recover it. 00:31:37.678 [2024-06-10 12:09:31.380021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.380365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.380373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.678 qpair failed and we were unable to recover it. 00:31:37.678 [2024-06-10 12:09:31.380763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.381111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.381119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.678 qpair failed and we were unable to recover it. 00:31:37.678 [2024-06-10 12:09:31.381339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.381692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.381700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.678 qpair failed and we were unable to recover it. 00:31:37.678 12:09:31 -- nvmf/common.sh@469 -- # nvmfpid=2157505 00:31:37.678 [2024-06-10 12:09:31.382132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 12:09:31 -- nvmf/common.sh@470 -- # waitforlisten 2157505 00:31:37.678 [2024-06-10 12:09:31.382467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.382475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.678 qpair failed and we were unable to recover it. 
00:31:37.678 12:09:31 -- common/autotest_common.sh@819 -- # '[' -z 2157505 ']' 00:31:37.678 12:09:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:37.678 [2024-06-10 12:09:31.382839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 12:09:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:37.678 12:09:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:37.678 [2024-06-10 12:09:31.383230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.383246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.678 qpair failed and we were unable to recover it. 00:31:37.678 12:09:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:37.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:37.678 [2024-06-10 12:09:31.383505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 12:09:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:37.678 12:09:31 -- common/autotest_common.sh@10 -- # set +x 00:31:37.678 [2024-06-10 12:09:31.383852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.383871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.678 qpair failed and we were unable to recover it. 00:31:37.678 [2024-06-10 12:09:31.384126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.384508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.384517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.678 qpair failed and we were unable to recover it. 00:31:37.678 [2024-06-10 12:09:31.384877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.385215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.385223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.678 qpair failed and we were unable to recover it. 00:31:37.678 [2024-06-10 12:09:31.385444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.385789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.385798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.678 qpair failed and we were unable to recover it. 
00:31:37.678 [2024-06-10 12:09:31.386146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.386553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.386562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.678 qpair failed and we were unable to recover it. 00:31:37.678 [2024-06-10 12:09:31.386804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.387102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.387110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.678 qpair failed and we were unable to recover it. 00:31:37.678 [2024-06-10 12:09:31.387480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.387860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.387869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.678 qpair failed and we were unable to recover it. 00:31:37.678 [2024-06-10 12:09:31.388093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.388255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.388267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.678 qpair failed and we were unable to recover it. 00:31:37.678 [2024-06-10 12:09:31.388697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.389091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.389100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.678 qpair failed and we were unable to recover it. 00:31:37.678 [2024-06-10 12:09:31.389560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.389980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.389991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.678 qpair failed and we were unable to recover it. 00:31:37.678 [2024-06-10 12:09:31.390452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.390820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.390831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.678 qpair failed and we were unable to recover it. 
00:31:37.678 [2024-06-10 12:09:31.391198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.391534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.391542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.678 qpair failed and we were unable to recover it. 00:31:37.678 [2024-06-10 12:09:31.391768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.392112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.392120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.678 qpair failed and we were unable to recover it. 00:31:37.678 [2024-06-10 12:09:31.392432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.392818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.392827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.678 qpair failed and we were unable to recover it. 00:31:37.678 [2024-06-10 12:09:31.393182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.393490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.393499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.678 qpair failed and we were unable to recover it. 00:31:37.678 [2024-06-10 12:09:31.393862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.394204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.394212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.678 qpair failed and we were unable to recover it. 00:31:37.678 [2024-06-10 12:09:31.394580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.394826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.394833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.678 qpair failed and we were unable to recover it. 00:31:37.678 [2024-06-10 12:09:31.395208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.395488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.395496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.678 qpair failed and we were unable to recover it. 
00:31:37.678 [2024-06-10 12:09:31.395853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.396237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.396255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.678 qpair failed and we were unable to recover it. 00:31:37.678 [2024-06-10 12:09:31.396596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.678 [2024-06-10 12:09:31.396977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.396985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.679 qpair failed and we were unable to recover it. 00:31:37.679 [2024-06-10 12:09:31.397214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.397541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.397549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.679 qpair failed and we were unable to recover it. 00:31:37.679 [2024-06-10 12:09:31.397870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.398259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.398267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.679 qpair failed and we were unable to recover it. 00:31:37.679 [2024-06-10 12:09:31.398595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.398914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.398922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.679 qpair failed and we were unable to recover it. 00:31:37.679 [2024-06-10 12:09:31.399291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.399442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.399449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.679 qpair failed and we were unable to recover it. 00:31:37.679 [2024-06-10 12:09:31.399790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.400133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.400141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.679 qpair failed and we were unable to recover it. 
00:31:37.679 [2024-06-10 12:09:31.400522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.400887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.400894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.679 qpair failed and we were unable to recover it. 00:31:37.679 [2024-06-10 12:09:31.401204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.401548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.401556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.679 qpair failed and we were unable to recover it. 00:31:37.679 [2024-06-10 12:09:31.401912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.402301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.402308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.679 qpair failed and we were unable to recover it. 00:31:37.679 [2024-06-10 12:09:31.402674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.403064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.403071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.679 qpair failed and we were unable to recover it. 00:31:37.679 [2024-06-10 12:09:31.403447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.403674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.403682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.679 qpair failed and we were unable to recover it. 00:31:37.679 [2024-06-10 12:09:31.404052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.404366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.404374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.679 qpair failed and we were unable to recover it. 00:31:37.679 [2024-06-10 12:09:31.404730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.405027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.405034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.679 qpair failed and we were unable to recover it. 
00:31:37.679 [2024-06-10 12:09:31.405443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.405800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.405807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.679 qpair failed and we were unable to recover it. 00:31:37.679 [2024-06-10 12:09:31.406164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.406355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.406364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.679 qpair failed and we were unable to recover it. 00:31:37.679 [2024-06-10 12:09:31.406617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.406980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.406988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.679 qpair failed and we were unable to recover it. 00:31:37.679 [2024-06-10 12:09:31.407365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.407768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.407776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.679 qpair failed and we were unable to recover it. 00:31:37.679 [2024-06-10 12:09:31.408105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.408512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.408520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.679 qpair failed and we were unable to recover it. 00:31:37.679 [2024-06-10 12:09:31.408871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.409055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.409064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.679 qpair failed and we were unable to recover it. 00:31:37.679 [2024-06-10 12:09:31.409423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.409818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.409825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.679 qpair failed and we were unable to recover it. 
00:31:37.679 [2024-06-10 12:09:31.410182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.410390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.410398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.679 qpair failed and we were unable to recover it. 00:31:37.679 [2024-06-10 12:09:31.410451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.410783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.410791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.679 qpair failed and we were unable to recover it. 00:31:37.679 [2024-06-10 12:09:31.411099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.411436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.411444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.679 qpair failed and we were unable to recover it. 00:31:37.679 [2024-06-10 12:09:31.411807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.412116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.412124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.679 qpair failed and we were unable to recover it. 00:31:37.679 [2024-06-10 12:09:31.412503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.412850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.412858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.679 qpair failed and we were unable to recover it. 00:31:37.679 [2024-06-10 12:09:31.413245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.413613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.413620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.679 qpair failed and we were unable to recover it. 00:31:37.679 [2024-06-10 12:09:31.413977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.679 [2024-06-10 12:09:31.414370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.414377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.680 qpair failed and we were unable to recover it. 
00:31:37.680 [2024-06-10 12:09:31.414729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.415113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.415120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.680 qpair failed and we were unable to recover it. 00:31:37.680 [2024-06-10 12:09:31.415497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.415894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.415903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.680 qpair failed and we were unable to recover it. 00:31:37.680 [2024-06-10 12:09:31.416249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.416428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.416437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.680 qpair failed and we were unable to recover it. 00:31:37.680 [2024-06-10 12:09:31.416816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.417172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.417180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.680 qpair failed and we were unable to recover it. 00:31:37.680 [2024-06-10 12:09:31.417570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.417848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.417856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.680 qpair failed and we were unable to recover it. 00:31:37.680 [2024-06-10 12:09:31.418252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.418466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.418474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.680 qpair failed and we were unable to recover it. 00:31:37.680 [2024-06-10 12:09:31.418674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.419000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.419007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.680 qpair failed and we were unable to recover it. 
00:31:37.680 [2024-06-10 12:09:31.419205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.419542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.419550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.680 qpair failed and we were unable to recover it. 00:31:37.680 [2024-06-10 12:09:31.419776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.420129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.420138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.680 qpair failed and we were unable to recover it. 00:31:37.680 [2024-06-10 12:09:31.420525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.420916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.420924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.680 qpair failed and we were unable to recover it. 00:31:37.680 [2024-06-10 12:09:31.421323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.421663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.421671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.680 qpair failed and we were unable to recover it. 00:31:37.680 [2024-06-10 12:09:31.422076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.422461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.422469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.680 qpair failed and we were unable to recover it. 00:31:37.680 [2024-06-10 12:09:31.422829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.423219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.423227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.680 qpair failed and we were unable to recover it. 00:31:37.680 [2024-06-10 12:09:31.423533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.423928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.423936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.680 qpair failed and we were unable to recover it. 
00:31:37.680 [2024-06-10 12:09:31.424287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.424661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.424669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.680 qpair failed and we were unable to recover it. 00:31:37.680 [2024-06-10 12:09:31.424942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.425113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.425121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.680 qpair failed and we were unable to recover it. 00:31:37.680 [2024-06-10 12:09:31.425281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.425626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.425633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.680 qpair failed and we were unable to recover it. 00:31:37.680 [2024-06-10 12:09:31.425992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.426340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.426348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.680 qpair failed and we were unable to recover it. 00:31:37.680 [2024-06-10 12:09:31.426679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.427044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.427051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.680 qpair failed and we were unable to recover it. 00:31:37.680 [2024-06-10 12:09:31.427429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.427777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.680 [2024-06-10 12:09:31.427784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.680 qpair failed and we were unable to recover it. 00:31:37.680 [2024-06-10 12:09:31.428155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.952 [2024-06-10 12:09:31.428528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.952 [2024-06-10 12:09:31.428537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.952 qpair failed and we were unable to recover it. 
00:31:37.952 [2024-06-10 12:09:31.428845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.952 [2024-06-10 12:09:31.429250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.952 [2024-06-10 12:09:31.429258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.952 qpair failed and we were unable to recover it. 00:31:37.952 [2024-06-10 12:09:31.429590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.952 [2024-06-10 12:09:31.429984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.952 [2024-06-10 12:09:31.429991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.952 qpair failed and we were unable to recover it. 00:31:37.952 [2024-06-10 12:09:31.430355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.430749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.430758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.953 qpair failed and we were unable to recover it. 00:31:37.953 [2024-06-10 12:09:31.431127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.431551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.431558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.953 qpair failed and we were unable to recover it. 00:31:37.953 [2024-06-10 12:09:31.431918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.432266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.432274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.953 qpair failed and we were unable to recover it. 00:31:37.953 [2024-06-10 12:09:31.432596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.432986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.432994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.953 qpair failed and we were unable to recover it. 00:31:37.953 [2024-06-10 12:09:31.433128] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:31:37.953 [2024-06-10 12:09:31.433175] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:37.953 [2024-06-10 12:09:31.433357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.433723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.433729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.953 qpair failed and we were unable to recover it. 00:31:37.953 [2024-06-10 12:09:31.434134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.434461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.434470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.953 qpair failed and we were unable to recover it. 00:31:37.953 [2024-06-10 12:09:31.434830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.435219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.435227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.953 qpair failed and we were unable to recover it. 00:31:37.953 [2024-06-10 12:09:31.435584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.435856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.435864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.953 qpair failed and we were unable to recover it. 00:31:37.953 [2024-06-10 12:09:31.436233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.436628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.436637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.953 qpair failed and we were unable to recover it. 00:31:37.953 [2024-06-10 12:09:31.437027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.437399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.437408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.953 qpair failed and we were unable to recover it. 00:31:37.953 [2024-06-10 12:09:31.437767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.437999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.438008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.953 qpair failed and we were unable to recover it. 
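The "Starting SPDK v24.01.1-pre ... / DPDK 23.11.0 initialization" line and the bracketed "[ DPDK EAL parameters: nvmf -c 0xF0 ... ]" line show another nvmf target process bringing up the DPDK environment abstraction layer while the connect loop keeps failing. "-c 0xF0" is a hexadecimal core mask selecting CPU cores 4-7, "--file-prefix=spdk0" keeps this process's hugepage and runtime files separate from other SPDK/DPDK processes, and "--proc-type=auto" lets EAL decide whether it runs as a primary or secondary process. As a small worked example of how the core-mask value decodes (illustrative only, not taken from SPDK or DPDK sources):

    /* Decode a DPDK-style hex core mask such as 0xF0 into the CPU core IDs it
     * selects: 0xF0 == binary 1111 0000 -> cores 4, 5, 6, 7. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long mask = 0xF0;   /* value of "-c 0xF0" from the log */

        printf("core mask 0x%lX selects cores:", mask);
        for (int core = 0; core < (int)(8 * sizeof(mask)); core++) {
            if (mask & (1UL << core))
                printf(" %d", core);
        }
        printf("\n");                /* prints: core mask 0xF0 selects cores: 4 5 6 7 */
        return 0;
    }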
00:31:37.953 [2024-06-10 12:09:31.438392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.438782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.438790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.953 qpair failed and we were unable to recover it. 00:31:37.953 [2024-06-10 12:09:31.438947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.439344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.439353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.953 qpair failed and we were unable to recover it. 00:31:37.953 [2024-06-10 12:09:31.439613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.439967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.439975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.953 qpair failed and we were unable to recover it. 00:31:37.953 [2024-06-10 12:09:31.440355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.440689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.440697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.953 qpair failed and we were unable to recover it. 00:31:37.953 [2024-06-10 12:09:31.440921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.441267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.441276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.953 qpair failed and we were unable to recover it. 00:31:37.953 [2024-06-10 12:09:31.441507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.441900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.441908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.953 qpair failed and we were unable to recover it. 00:31:37.953 [2024-06-10 12:09:31.442266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.442512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.442520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.953 qpair failed and we were unable to recover it. 
00:31:37.953 [2024-06-10 12:09:31.442886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.443234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.443245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.953 qpair failed and we were unable to recover it. 00:31:37.953 [2024-06-10 12:09:31.443700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.443936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.443945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.953 qpair failed and we were unable to recover it. 00:31:37.953 [2024-06-10 12:09:31.444365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.444718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.444727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.953 qpair failed and we were unable to recover it. 00:31:37.953 [2024-06-10 12:09:31.444921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.445295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.445304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.953 qpair failed and we were unable to recover it. 00:31:37.953 [2024-06-10 12:09:31.445507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.445828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.445837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.953 qpair failed and we were unable to recover it. 00:31:37.953 [2024-06-10 12:09:31.446224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.446609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.953 [2024-06-10 12:09:31.446618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.953 qpair failed and we were unable to recover it. 00:31:37.953 [2024-06-10 12:09:31.446979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.447160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.447168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.954 qpair failed and we were unable to recover it. 
00:31:37.954 [2024-06-10 12:09:31.447504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.447869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.447877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.954 qpair failed and we were unable to recover it. 00:31:37.954 [2024-06-10 12:09:31.448240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.448459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.448468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.954 qpair failed and we were unable to recover it. 00:31:37.954 [2024-06-10 12:09:31.448850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.449244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.449253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.954 qpair failed and we were unable to recover it. 00:31:37.954 [2024-06-10 12:09:31.449600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.449949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.449957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.954 qpair failed and we were unable to recover it. 00:31:37.954 [2024-06-10 12:09:31.450317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.450702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.450712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.954 qpair failed and we were unable to recover it. 00:31:37.954 [2024-06-10 12:09:31.450961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.451311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.451320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.954 qpair failed and we were unable to recover it. 00:31:37.954 [2024-06-10 12:09:31.451556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.451903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.451912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.954 qpair failed and we were unable to recover it. 
00:31:37.954 [2024-06-10 12:09:31.452273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.452584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.452592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.954 qpair failed and we were unable to recover it. 00:31:37.954 [2024-06-10 12:09:31.452961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.453308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.453316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.954 qpair failed and we were unable to recover it. 00:31:37.954 [2024-06-10 12:09:31.453671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.454059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.454068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.954 qpair failed and we were unable to recover it. 00:31:37.954 [2024-06-10 12:09:31.454289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.454668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.454675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.954 qpair failed and we were unable to recover it. 00:31:37.954 [2024-06-10 12:09:31.454871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.455196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.455204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.954 qpair failed and we were unable to recover it. 00:31:37.954 [2024-06-10 12:09:31.455551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.455940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.455947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.954 qpair failed and we were unable to recover it. 00:31:37.954 [2024-06-10 12:09:31.456148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.456483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.456491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.954 qpair failed and we were unable to recover it. 
00:31:37.954 [2024-06-10 12:09:31.456879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.457278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.457287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.954 qpair failed and we were unable to recover it. 00:31:37.954 [2024-06-10 12:09:31.457567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.457915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.457923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.954 qpair failed and we were unable to recover it. 00:31:37.954 [2024-06-10 12:09:31.458118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.458467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.458475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.954 qpair failed and we were unable to recover it. 00:31:37.954 [2024-06-10 12:09:31.458846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.459245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.459254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.954 qpair failed and we were unable to recover it. 00:31:37.954 [2024-06-10 12:09:31.459595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.459795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.459803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.954 qpair failed and we were unable to recover it. 00:31:37.954 [2024-06-10 12:09:31.460116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.460498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.460506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.954 qpair failed and we were unable to recover it. 00:31:37.954 [2024-06-10 12:09:31.460679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.461028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.461035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.954 qpair failed and we were unable to recover it. 
00:31:37.954 [2024-06-10 12:09:31.461395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.461574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.461582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.954 qpair failed and we were unable to recover it. 00:31:37.954 [2024-06-10 12:09:31.461918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.462125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.462133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.954 qpair failed and we were unable to recover it. 00:31:37.954 [2024-06-10 12:09:31.462326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.462722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.462729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.954 qpair failed and we were unable to recover it. 00:31:37.954 [2024-06-10 12:09:31.463100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.463470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.463478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.954 qpair failed and we were unable to recover it. 00:31:37.954 [2024-06-10 12:09:31.463847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.464150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.464157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.954 qpair failed and we were unable to recover it. 00:31:37.954 [2024-06-10 12:09:31.464550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.954 [2024-06-10 12:09:31.464948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.464956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.955 qpair failed and we were unable to recover it. 00:31:37.955 EAL: No free 2048 kB hugepages reported on node 1 00:31:37.955 [2024-06-10 12:09:31.465342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.465737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.465745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.955 qpair failed and we were unable to recover it. 
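The interleaved "EAL: No free 2048 kB hugepages reported on node 1" line comes from DPDK's hugepage probing during that initialization: NUMA node 1 has no free 2 MB hugepages, so the EAL can only allocate from nodes that do have them. Per-node availability can be read from sysfs; a minimal sketch follows (the sysfs path is standard on Linux but assumes a NUMA system with 2 MB hugepages configured, and node1 may simply not exist on a single-socket host):

    /* Minimal sketch: read the free 2048 kB hugepage count for NUMA node 1
     * from sysfs, the same information behind the EAL message above. */
    #include <stdio.h>

    int main(void)
    {
        const char *path =
            "/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages";
        FILE *f = fopen(path, "r");
        if (!f) {
            perror("fopen");        /* node1 absent or hugepages not configured */
            return 1;
        }

        unsigned long free_pages = 0;
        if (fscanf(f, "%lu", &free_pages) == 1)
            printf("node 1 free 2048kB hugepages: %lu\n", free_pages);
        fclose(f);
        return 0;
    }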
00:31:37.955 [2024-06-10 12:09:31.465973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.466365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.466373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.955 qpair failed and we were unable to recover it. 00:31:37.955 [2024-06-10 12:09:31.466752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.467097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.467105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.955 qpair failed and we were unable to recover it. 00:31:37.955 [2024-06-10 12:09:31.467337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.467727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.467734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.955 qpair failed and we were unable to recover it. 00:31:37.955 [2024-06-10 12:09:31.468118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.468516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.468523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.955 qpair failed and we were unable to recover it. 00:31:37.955 [2024-06-10 12:09:31.468891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.469237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.469249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.955 qpair failed and we were unable to recover it. 00:31:37.955 [2024-06-10 12:09:31.469586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.469977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.469984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.955 qpair failed and we were unable to recover it. 00:31:37.955 [2024-06-10 12:09:31.470478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.470898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.470909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.955 qpair failed and we were unable to recover it. 
00:31:37.955 [2024-06-10 12:09:31.471292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.471630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.471637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.955 qpair failed and we were unable to recover it. 00:31:37.955 [2024-06-10 12:09:31.472009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.472402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.472411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.955 qpair failed and we were unable to recover it. 00:31:37.955 [2024-06-10 12:09:31.472777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.473122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.473129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.955 qpair failed and we were unable to recover it. 00:31:37.955 [2024-06-10 12:09:31.473483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.473875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.473883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.955 qpair failed and we were unable to recover it. 00:31:37.955 [2024-06-10 12:09:31.474250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.474582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.474591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.955 qpair failed and we were unable to recover it. 00:31:37.955 [2024-06-10 12:09:31.474950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.475231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.475239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.955 qpair failed and we were unable to recover it. 00:31:37.955 [2024-06-10 12:09:31.475501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.475892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.475899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.955 qpair failed and we were unable to recover it. 
00:31:37.955 [2024-06-10 12:09:31.476286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.476666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.476674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.955 qpair failed and we were unable to recover it. 00:31:37.955 [2024-06-10 12:09:31.476932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.477323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.477331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.955 qpair failed and we were unable to recover it. 00:31:37.955 [2024-06-10 12:09:31.477703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.478093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.478100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.955 qpair failed and we were unable to recover it. 00:31:37.955 [2024-06-10 12:09:31.478509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.478850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.478859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.955 qpair failed and we were unable to recover it. 00:31:37.955 [2024-06-10 12:09:31.479239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.479596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.479603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.955 qpair failed and we were unable to recover it. 00:31:37.955 [2024-06-10 12:09:31.479961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.480274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.480282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.955 qpair failed and we were unable to recover it. 00:31:37.955 [2024-06-10 12:09:31.480634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.480865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.480874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.955 qpair failed and we were unable to recover it. 
00:31:37.955 [2024-06-10 12:09:31.481135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.481506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.481514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.955 qpair failed and we were unable to recover it. 00:31:37.955 [2024-06-10 12:09:31.481893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.482096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.482103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.955 qpair failed and we were unable to recover it. 00:31:37.955 [2024-06-10 12:09:31.482463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.482807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.482814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.955 qpair failed and we were unable to recover it. 00:31:37.955 [2024-06-10 12:09:31.483067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.955 [2024-06-10 12:09:31.483371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.483379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.956 qpair failed and we were unable to recover it. 00:31:37.956 [2024-06-10 12:09:31.483748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.484143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.484150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.956 qpair failed and we were unable to recover it. 00:31:37.956 [2024-06-10 12:09:31.484508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.484863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.484871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.956 qpair failed and we were unable to recover it. 00:31:37.956 [2024-06-10 12:09:31.485323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.485641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.485648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.956 qpair failed and we were unable to recover it. 
00:31:37.956 [2024-06-10 12:09:31.485871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.486063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.486071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.956 qpair failed and we were unable to recover it. 00:31:37.956 [2024-06-10 12:09:31.486373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.486721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.486729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.956 qpair failed and we were unable to recover it. 00:31:37.956 [2024-06-10 12:09:31.486996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.487380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.487387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.956 qpair failed and we were unable to recover it. 00:31:37.956 [2024-06-10 12:09:31.487655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.488045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.488053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.956 qpair failed and we were unable to recover it. 00:31:37.956 [2024-06-10 12:09:31.488421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.488810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.488817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.956 qpair failed and we were unable to recover it. 00:31:37.956 [2024-06-10 12:09:31.489171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.489528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.489536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.956 qpair failed and we were unable to recover it. 00:31:37.956 [2024-06-10 12:09:31.489924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.490268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.490275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.956 qpair failed and we were unable to recover it. 
00:31:37.956 [2024-06-10 12:09:31.490445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.490797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.490805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.956 qpair failed and we were unable to recover it. 00:31:37.956 [2024-06-10 12:09:31.491027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.491207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.491216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.956 qpair failed and we were unable to recover it. 00:31:37.956 [2024-06-10 12:09:31.491544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.491900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.491907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.956 qpair failed and we were unable to recover it. 00:31:37.956 [2024-06-10 12:09:31.492285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.492638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.492646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.956 qpair failed and we were unable to recover it. 00:31:37.956 [2024-06-10 12:09:31.493004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.493399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.493406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.956 qpair failed and we were unable to recover it. 00:31:37.956 [2024-06-10 12:09:31.493669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.494018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.494026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.956 qpair failed and we were unable to recover it. 00:31:37.956 [2024-06-10 12:09:31.494341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.494727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.494735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.956 qpair failed and we were unable to recover it. 
00:31:37.956 [2024-06-10 12:09:31.495086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.495454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.495461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.956 qpair failed and we were unable to recover it. 00:31:37.956 [2024-06-10 12:09:31.495823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.496165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.496173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.956 qpair failed and we were unable to recover it. 00:31:37.956 [2024-06-10 12:09:31.496482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.496870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.496877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.956 qpair failed and we were unable to recover it. 00:31:37.956 [2024-06-10 12:09:31.497100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.497332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.497340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.956 qpair failed and we were unable to recover it. 00:31:37.956 [2024-06-10 12:09:31.497652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.498003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.498011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.956 qpair failed and we were unable to recover it. 00:31:37.956 [2024-06-10 12:09:31.498322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.498512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.498520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.956 qpair failed and we were unable to recover it. 00:31:37.956 [2024-06-10 12:09:31.498862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.499255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.499262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.956 qpair failed and we were unable to recover it. 
00:31:37.956 [2024-06-10 12:09:31.499623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.499854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.499861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.956 qpair failed and we were unable to recover it. 00:31:37.956 [2024-06-10 12:09:31.500240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.500569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.956 [2024-06-10 12:09:31.500577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.956 qpair failed and we were unable to recover it. 00:31:37.956 [2024-06-10 12:09:31.500848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.501190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.501197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.957 qpair failed and we were unable to recover it. 00:31:37.957 [2024-06-10 12:09:31.501552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.501946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.501954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.957 qpair failed and we were unable to recover it. 00:31:37.957 [2024-06-10 12:09:31.502314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.502660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.502667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.957 qpair failed and we were unable to recover it. 00:31:37.957 [2024-06-10 12:09:31.502915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.503265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.503273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.957 qpair failed and we were unable to recover it. 00:31:37.957 [2024-06-10 12:09:31.503639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.503981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.503989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.957 qpair failed and we were unable to recover it. 
00:31:37.957 [2024-06-10 12:09:31.504371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.504711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.504718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.957 qpair failed and we were unable to recover it. 00:31:37.957 [2024-06-10 12:09:31.505108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.505460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.505467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.957 qpair failed and we were unable to recover it. 00:31:37.957 [2024-06-10 12:09:31.505845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.506234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.506241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.957 qpair failed and we were unable to recover it. 00:31:37.957 [2024-06-10 12:09:31.506501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.506885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.506893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.957 qpair failed and we were unable to recover it. 00:31:37.957 [2024-06-10 12:09:31.507088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.507460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.507468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.957 qpair failed and we were unable to recover it. 00:31:37.957 [2024-06-10 12:09:31.507842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.508079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.508087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.957 qpair failed and we were unable to recover it. 00:31:37.957 [2024-06-10 12:09:31.508472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.508900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.508908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.957 qpair failed and we were unable to recover it. 
00:31:37.957 [2024-06-10 12:09:31.509268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.509491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.509499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.957 qpair failed and we were unable to recover it. 00:31:37.957 [2024-06-10 12:09:31.509874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.510202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.510209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.957 qpair failed and we were unable to recover it. 00:31:37.957 [2024-06-10 12:09:31.510424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.510780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.510787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.957 qpair failed and we were unable to recover it. 00:31:37.957 [2024-06-10 12:09:31.511168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.511512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.511520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.957 qpair failed and we were unable to recover it. 00:31:37.957 [2024-06-10 12:09:31.511959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.512267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.512275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.957 qpair failed and we were unable to recover it. 00:31:37.957 [2024-06-10 12:09:31.512656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.512973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.512982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.957 qpair failed and we were unable to recover it. 00:31:37.957 [2024-06-10 12:09:31.513339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.513496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.513504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.957 qpair failed and we were unable to recover it. 
00:31:37.957 [2024-06-10 12:09:31.513742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.514100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.514107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.957 qpair failed and we were unable to recover it. 00:31:37.957 [2024-06-10 12:09:31.514473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.514835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.514843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.957 qpair failed and we were unable to recover it. 00:31:37.957 [2024-06-10 12:09:31.515076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.515427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.957 [2024-06-10 12:09:31.515434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.957 qpair failed and we were unable to recover it. 00:31:37.957 [2024-06-10 12:09:31.515790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.516131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.516139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.958 qpair failed and we were unable to recover it. 00:31:37.958 [2024-06-10 12:09:31.516193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.516539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.516547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.958 qpair failed and we were unable to recover it. 00:31:37.958 [2024-06-10 12:09:31.516888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.517237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.517247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.958 qpair failed and we were unable to recover it. 00:31:37.958 [2024-06-10 12:09:31.517463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.517818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.517827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.958 qpair failed and we were unable to recover it. 
00:31:37.958 [2024-06-10 12:09:31.518248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.518248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:37.958 [2024-06-10 12:09:31.518631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.518639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.958 qpair failed and we were unable to recover it. 00:31:37.958 [2024-06-10 12:09:31.518997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.519343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.519351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.958 qpair failed and we were unable to recover it. 00:31:37.958 [2024-06-10 12:09:31.519725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.520045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.520053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.958 qpair failed and we were unable to recover it. 00:31:37.958 [2024-06-10 12:09:31.520414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.520786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.520795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.958 qpair failed and we were unable to recover it. 00:31:37.958 [2024-06-10 12:09:31.521169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.521531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.521538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.958 qpair failed and we were unable to recover it. 00:31:37.958 [2024-06-10 12:09:31.521923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.522312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.522320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.958 qpair failed and we were unable to recover it. 00:31:37.958 [2024-06-10 12:09:31.522692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.523087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.523095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.958 qpair failed and we were unable to recover it. 
00:31:37.958 [2024-06-10 12:09:31.523457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.523803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.523811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.958 qpair failed and we were unable to recover it. 00:31:37.958 [2024-06-10 12:09:31.524173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.524415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.524423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.958 qpair failed and we were unable to recover it. 00:31:37.958 [2024-06-10 12:09:31.524822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.525176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.525184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.958 qpair failed and we were unable to recover it. 00:31:37.958 [2024-06-10 12:09:31.525629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.525945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.525952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.958 qpair failed and we were unable to recover it. 00:31:37.958 [2024-06-10 12:09:31.526313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.526674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.526682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.958 qpair failed and we were unable to recover it. 00:31:37.958 [2024-06-10 12:09:31.526948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.527290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.527298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.958 qpair failed and we were unable to recover it. 00:31:37.958 [2024-06-10 12:09:31.527626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.527970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.527978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.958 qpair failed and we were unable to recover it. 
00:31:37.958 [2024-06-10 12:09:31.528336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.528715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.528722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.958 qpair failed and we were unable to recover it. 00:31:37.958 [2024-06-10 12:09:31.529079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.529293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.529301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.958 qpair failed and we were unable to recover it. 00:31:37.958 [2024-06-10 12:09:31.529470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.529737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.529744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.958 qpair failed and we were unable to recover it. 00:31:37.958 [2024-06-10 12:09:31.530123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.530495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.530503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.958 qpair failed and we were unable to recover it. 00:31:37.958 [2024-06-10 12:09:31.530866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.531209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.531217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.958 qpair failed and we were unable to recover it. 00:31:37.958 [2024-06-10 12:09:31.531564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.531950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.531958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.958 qpair failed and we were unable to recover it. 00:31:37.958 [2024-06-10 12:09:31.532276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.958 [2024-06-10 12:09:31.532681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.532689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.959 qpair failed and we were unable to recover it. 
00:31:37.959 [2024-06-10 12:09:31.533068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.533428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.533436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.959 qpair failed and we were unable to recover it. 00:31:37.959 [2024-06-10 12:09:31.533795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.534185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.534193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.959 qpair failed and we were unable to recover it. 00:31:37.959 [2024-06-10 12:09:31.534546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.534915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.534922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.959 qpair failed and we were unable to recover it. 00:31:37.959 [2024-06-10 12:09:31.535269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.535662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.535670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.959 qpair failed and we were unable to recover it. 00:31:37.959 [2024-06-10 12:09:31.536050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.536279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.536287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.959 qpair failed and we were unable to recover it. 00:31:37.959 [2024-06-10 12:09:31.536575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.536622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.536629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.959 qpair failed and we were unable to recover it. 00:31:37.959 [2024-06-10 12:09:31.536967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.537190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.537197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.959 qpair failed and we were unable to recover it. 
00:31:37.959 [2024-06-10 12:09:31.537622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.538009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.538016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.959 qpair failed and we were unable to recover it. 00:31:37.959 [2024-06-10 12:09:31.538400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.538772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.538779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.959 qpair failed and we were unable to recover it. 00:31:37.959 [2024-06-10 12:09:31.539138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.539504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.539512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.959 qpair failed and we were unable to recover it. 00:31:37.959 [2024-06-10 12:09:31.539878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.540220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.540228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.959 qpair failed and we were unable to recover it. 00:31:37.959 [2024-06-10 12:09:31.540330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.540537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.540545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.959 qpair failed and we were unable to recover it. 00:31:37.959 [2024-06-10 12:09:31.540921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.541045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.541053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.959 qpair failed and we were unable to recover it. 00:31:37.959 [2024-06-10 12:09:31.541418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.541648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.541656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.959 qpair failed and we were unable to recover it. 
00:31:37.959 [2024-06-10 12:09:31.542006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.542353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.542362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.959 qpair failed and we were unable to recover it. 00:31:37.959 [2024-06-10 12:09:31.542607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.542952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.542959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.959 qpair failed and we were unable to recover it. 00:31:37.959 [2024-06-10 12:09:31.543351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.543715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.543722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.959 qpair failed and we were unable to recover it. 00:31:37.959 [2024-06-10 12:09:31.543938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.544277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.544285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.959 qpair failed and we were unable to recover it. 00:31:37.959 [2024-06-10 12:09:31.544636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.544999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.545007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.959 qpair failed and we were unable to recover it. 00:31:37.959 [2024-06-10 12:09:31.545361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.545597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.545605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.959 qpair failed and we were unable to recover it. 00:31:37.959 [2024-06-10 12:09:31.545982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.546135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.546143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.959 qpair failed and we were unable to recover it. 
00:31:37.959 [2024-06-10 12:09:31.546511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.546806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.546814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.959 qpair failed and we were unable to recover it. 00:31:37.959 [2024-06-10 12:09:31.547258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.547463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.547471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.959 qpair failed and we were unable to recover it. 00:31:37.959 [2024-06-10 12:09:31.547825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.548165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.548173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.959 qpair failed and we were unable to recover it. 00:31:37.959 [2024-06-10 12:09:31.548531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.548920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.548929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.959 qpair failed and we were unable to recover it. 00:31:37.959 [2024-06-10 12:09:31.549311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.549691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.549699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.959 qpair failed and we were unable to recover it. 00:31:37.959 [2024-06-10 12:09:31.550059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.959 [2024-06-10 12:09:31.550393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.550401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.960 qpair failed and we were unable to recover it. 00:31:37.960 [2024-06-10 12:09:31.550633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.550879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.550887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.960 qpair failed and we were unable to recover it. 
00:31:37.960 [2024-06-10 12:09:31.551228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.551509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.551517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.960 qpair failed and we were unable to recover it. 00:31:37.960 [2024-06-10 12:09:31.551878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.552224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.552232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.960 qpair failed and we were unable to recover it. 00:31:37.960 [2024-06-10 12:09:31.552590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.552812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.552819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.960 qpair failed and we were unable to recover it. 00:31:37.960 [2024-06-10 12:09:31.553018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.553250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.553258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.960 qpair failed and we were unable to recover it. 00:31:37.960 [2024-06-10 12:09:31.553542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.554021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.554028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.960 qpair failed and we were unable to recover it. 00:31:37.960 [2024-06-10 12:09:31.554247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.554586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.554594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.960 qpair failed and we were unable to recover it. 00:31:37.960 [2024-06-10 12:09:31.554957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.555304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.555312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.960 qpair failed and we were unable to recover it. 
00:31:37.960 [2024-06-10 12:09:31.555735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.556124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.556133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.960 qpair failed and we were unable to recover it. 00:31:37.960 [2024-06-10 12:09:31.556397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.556610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.556618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.960 qpair failed and we were unable to recover it. 00:31:37.960 [2024-06-10 12:09:31.556986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.557359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.557367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.960 qpair failed and we were unable to recover it. 00:31:37.960 [2024-06-10 12:09:31.557596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.557929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.557936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.960 qpair failed and we were unable to recover it. 00:31:37.960 [2024-06-10 12:09:31.558292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.558571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.558579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.960 qpair failed and we were unable to recover it. 00:31:37.960 [2024-06-10 12:09:31.558775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.559114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.559121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.960 qpair failed and we were unable to recover it. 00:31:37.960 [2024-06-10 12:09:31.559495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.559681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.559688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.960 qpair failed and we were unable to recover it. 
00:31:37.960 [2024-06-10 12:09:31.560058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.560439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.560446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.960 qpair failed and we were unable to recover it. 00:31:37.960 [2024-06-10 12:09:31.560666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.561041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.561049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.960 qpair failed and we were unable to recover it. 00:31:37.960 [2024-06-10 12:09:31.561436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.561779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.561787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.960 qpair failed and we were unable to recover it. 00:31:37.960 [2024-06-10 12:09:31.562143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.562500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.562508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.960 qpair failed and we were unable to recover it. 00:31:37.960 [2024-06-10 12:09:31.562733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.563027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.563035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.960 qpair failed and we were unable to recover it. 00:31:37.960 [2024-06-10 12:09:31.563403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.563773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.563780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.960 qpair failed and we were unable to recover it. 00:31:37.960 [2024-06-10 12:09:31.564039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.564396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.564404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.960 qpair failed and we were unable to recover it. 
00:31:37.960 [2024-06-10 12:09:31.564769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.565159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.565167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.960 qpair failed and we were unable to recover it. 00:31:37.960 [2024-06-10 12:09:31.565569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.960 [2024-06-10 12:09:31.565992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.565999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-06-10 12:09:31.566355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.566707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.566715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-06-10 12:09:31.566912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.567257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.567265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-06-10 12:09:31.567602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.567988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.567995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-06-10 12:09:31.568266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.568656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.568663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-06-10 12:09:31.569060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.569450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.569457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.961 qpair failed and we were unable to recover it. 
00:31:37.961 [2024-06-10 12:09:31.569835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.570116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.570124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-06-10 12:09:31.570510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.570898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.570905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-06-10 12:09:31.571251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.571596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.571604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-06-10 12:09:31.571958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.572346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.572354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-06-10 12:09:31.572729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.573075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.573082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-06-10 12:09:31.573434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.573779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.573788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-06-10 12:09:31.574147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.574501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.574508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.961 qpair failed and we were unable to recover it. 
00:31:37.961 [2024-06-10 12:09:31.574874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.575265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.575272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-06-10 12:09:31.575631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.576057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.576065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-06-10 12:09:31.576416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.576715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.576723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-06-10 12:09:31.577105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.577486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.577494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-06-10 12:09:31.577749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.578128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.578136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-06-10 12:09:31.578516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.578901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.578908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-06-10 12:09:31.579274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.579633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.579643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.961 qpair failed and we were unable to recover it. 
00:31:37.961 [2024-06-10 12:09:31.580035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.580424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.580432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-06-10 12:09:31.580788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.581017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.581025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-06-10 12:09:31.581278] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:37.961 [2024-06-10 12:09:31.581341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.581406] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:37.961 [2024-06-10 12:09:31.581416] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:37.961 [2024-06-10 12:09:31.581424] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:37.961 [2024-06-10 12:09:31.581568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:31:37.961 [2024-06-10 12:09:31.581702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.581710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-06-10 12:09:31.581699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:31:37.961 [2024-06-10 12:09:31.581745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:31:37.961 [2024-06-10 12:09:31.582086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.582264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.582273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-06-10 12:09:31.582651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.583017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.583025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.961 qpair failed and we were unable to recover it. 
00:31:37.961 [2024-06-10 12:09:31.583182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.961 [2024-06-10 12:09:31.583540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.583547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.962 qpair failed and we were unable to recover it. 00:31:37.962 [2024-06-10 12:09:31.583937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.584171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.584178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.962 qpair failed and we were unable to recover it. 00:31:37.962 [2024-06-10 12:09:31.581746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:31:37.962 [2024-06-10 12:09:31.584629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.585028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.585036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.962 qpair failed and we were unable to recover it. 00:31:37.962 [2024-06-10 12:09:31.585402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.585641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.585648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.962 qpair failed and we were unable to recover it. 00:31:37.962 [2024-06-10 12:09:31.585885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.586204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.586214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.962 qpair failed and we were unable to recover it. 00:31:37.962 [2024-06-10 12:09:31.586408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.586633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.586641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.962 qpair failed and we were unable to recover it. 00:31:37.962 [2024-06-10 12:09:31.586736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.587130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.587137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.962 qpair failed and we were unable to recover it. 
00:31:37.962 [2024-06-10 12:09:31.587370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.587628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.587636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.962 qpair failed and we were unable to recover it. 00:31:37.962 [2024-06-10 12:09:31.588000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.588235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.588247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.962 qpair failed and we were unable to recover it. 00:31:37.962 [2024-06-10 12:09:31.588435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.588823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.588831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.962 qpair failed and we were unable to recover it. 00:31:37.962 [2024-06-10 12:09:31.589095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.589428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.589437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.962 qpair failed and we were unable to recover it. 00:31:37.962 [2024-06-10 12:09:31.589659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.589887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.589895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.962 qpair failed and we were unable to recover it. 00:31:37.962 [2024-06-10 12:09:31.590169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.590524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.590533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.962 qpair failed and we were unable to recover it. 00:31:37.962 [2024-06-10 12:09:31.590668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.591003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.591010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.962 qpair failed and we were unable to recover it. 
00:31:37.962 [2024-06-10 12:09:31.591360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.591488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.591496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.962 qpair failed and we were unable to recover it. 00:31:37.962 [2024-06-10 12:09:31.591847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.592130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.592139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.962 qpair failed and we were unable to recover it. 00:31:37.962 [2024-06-10 12:09:31.592376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.592731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.592738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.962 qpair failed and we were unable to recover it. 00:31:37.962 [2024-06-10 12:09:31.593123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.593507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.593514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.962 qpair failed and we were unable to recover it. 00:31:37.962 [2024-06-10 12:09:31.593786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.594179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.594188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.962 qpair failed and we were unable to recover it. 00:31:37.962 [2024-06-10 12:09:31.594548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.594779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.594787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.962 qpair failed and we were unable to recover it. 00:31:37.962 [2024-06-10 12:09:31.595148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.595521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.595529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.962 qpair failed and we were unable to recover it. 
00:31:37.962 [2024-06-10 12:09:31.595917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.596118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.596125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.962 qpair failed and we were unable to recover it. 00:31:37.962 [2024-06-10 12:09:31.596481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.962 [2024-06-10 12:09:31.596869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.596878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.963 qpair failed and we were unable to recover it. 00:31:37.963 [2024-06-10 12:09:31.597239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.597611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.597619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.963 qpair failed and we were unable to recover it. 00:31:37.963 [2024-06-10 12:09:31.597969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.598207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.598215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.963 qpair failed and we were unable to recover it. 00:31:37.963 [2024-06-10 12:09:31.598481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.598876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.598884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.963 qpair failed and we were unable to recover it. 00:31:37.963 [2024-06-10 12:09:31.599265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.599637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.599645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.963 qpair failed and we were unable to recover it. 00:31:37.963 [2024-06-10 12:09:31.600008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.600400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.600408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.963 qpair failed and we were unable to recover it. 
00:31:37.963 [2024-06-10 12:09:31.600764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.601010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.601017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.963 qpair failed and we were unable to recover it. 00:31:37.963 [2024-06-10 12:09:31.601392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.601703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.601710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.963 qpair failed and we were unable to recover it. 00:31:37.963 [2024-06-10 12:09:31.602071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.602455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.602463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.963 qpair failed and we were unable to recover it. 00:31:37.963 [2024-06-10 12:09:31.602579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.602922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.602929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.963 qpair failed and we were unable to recover it. 00:31:37.963 [2024-06-10 12:09:31.603292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.603674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.603683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.963 qpair failed and we were unable to recover it. 00:31:37.963 [2024-06-10 12:09:31.604070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.604459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.604468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.963 qpair failed and we were unable to recover it. 00:31:37.963 [2024-06-10 12:09:31.604821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.605173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.605181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.963 qpair failed and we were unable to recover it. 
00:31:37.963 [2024-06-10 12:09:31.605543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.605931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.605939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.963 qpair failed and we were unable to recover it. 00:31:37.963 [2024-06-10 12:09:31.606295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.606625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.606634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.963 qpair failed and we were unable to recover it. 00:31:37.963 [2024-06-10 12:09:31.606961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.607069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.607075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.963 qpair failed and we were unable to recover it. 00:31:37.963 [2024-06-10 12:09:31.607293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.607650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.607658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.963 qpair failed and we were unable to recover it. 00:31:37.963 [2024-06-10 12:09:31.607903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.608064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.608073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.963 qpair failed and we were unable to recover it. 00:31:37.963 [2024-06-10 12:09:31.608451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.608795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.608803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.963 qpair failed and we were unable to recover it. 00:31:37.963 [2024-06-10 12:09:31.609182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.609577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.609584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.963 qpair failed and we were unable to recover it. 
00:31:37.963 [2024-06-10 12:09:31.609947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.610293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.610303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.963 qpair failed and we were unable to recover it. 00:31:37.963 [2024-06-10 12:09:31.610391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.610706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.610713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.963 qpair failed and we were unable to recover it. 00:31:37.963 [2024-06-10 12:09:31.610809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.611190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.611197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.963 qpair failed and we were unable to recover it. 00:31:37.963 [2024-06-10 12:09:31.611578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.611968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.611976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.963 qpair failed and we were unable to recover it. 00:31:37.963 [2024-06-10 12:09:31.612200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.612549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.612557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.963 qpair failed and we were unable to recover it. 00:31:37.963 [2024-06-10 12:09:31.612781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.613176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.963 [2024-06-10 12:09:31.613184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.963 qpair failed and we were unable to recover it. 00:31:37.964 [2024-06-10 12:09:31.613549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.613941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.613949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.964 qpair failed and we were unable to recover it. 
00:31:37.964 [2024-06-10 12:09:31.614027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.614251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.614258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.964 qpair failed and we were unable to recover it. 00:31:37.964 [2024-06-10 12:09:31.614572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.614875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.614883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.964 qpair failed and we were unable to recover it. 00:31:37.964 [2024-06-10 12:09:31.615252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.615622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.615631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.964 qpair failed and we were unable to recover it. 00:31:37.964 [2024-06-10 12:09:31.615905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.616140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.616149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.964 qpair failed and we were unable to recover it. 00:31:37.964 [2024-06-10 12:09:31.616511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.616787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.616795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.964 qpair failed and we were unable to recover it. 00:31:37.964 [2024-06-10 12:09:31.617056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.617399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.617407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.964 qpair failed and we were unable to recover it. 00:31:37.964 [2024-06-10 12:09:31.617768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.618168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.618176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.964 qpair failed and we were unable to recover it. 
00:31:37.964 [2024-06-10 12:09:31.618542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.618697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.618706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.964 qpair failed and we were unable to recover it. 00:31:37.964 [2024-06-10 12:09:31.619058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.619417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.619425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.964 qpair failed and we were unable to recover it. 00:31:37.964 [2024-06-10 12:09:31.619788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.620014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.620022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.964 qpair failed and we were unable to recover it. 00:31:37.964 [2024-06-10 12:09:31.620251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.620553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.620560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.964 qpair failed and we were unable to recover it. 00:31:37.964 [2024-06-10 12:09:31.620917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.621259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.621267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.964 qpair failed and we were unable to recover it. 00:31:37.964 [2024-06-10 12:09:31.621640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.621989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.621998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.964 qpair failed and we were unable to recover it. 00:31:37.964 [2024-06-10 12:09:31.622197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.622537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.622546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.964 qpair failed and we were unable to recover it. 
00:31:37.964 [2024-06-10 12:09:31.622770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.622830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.622837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.964 qpair failed and we were unable to recover it. 00:31:37.964 [2024-06-10 12:09:31.623022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.623367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.623376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.964 qpair failed and we were unable to recover it. 00:31:37.964 [2024-06-10 12:09:31.623767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.624003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.624010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.964 qpair failed and we were unable to recover it. 00:31:37.964 [2024-06-10 12:09:31.624373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.624817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.624825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.964 qpair failed and we were unable to recover it. 00:31:37.964 [2024-06-10 12:09:31.625179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.625541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.625549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.964 qpair failed and we were unable to recover it. 00:31:37.964 [2024-06-10 12:09:31.625771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.626129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.626137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.964 qpair failed and we were unable to recover it. 00:31:37.964 [2024-06-10 12:09:31.626520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.626867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.626875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.964 qpair failed and we were unable to recover it. 
00:31:37.964 [2024-06-10 12:09:31.627250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.627463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.627472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.964 qpair failed and we were unable to recover it. 00:31:37.964 [2024-06-10 12:09:31.627832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.628178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.628185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.964 qpair failed and we were unable to recover it. 00:31:37.964 [2024-06-10 12:09:31.628550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.628788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.628796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.964 qpair failed and we were unable to recover it. 00:31:37.964 [2024-06-10 12:09:31.629053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.629453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.629462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.964 qpair failed and we were unable to recover it. 00:31:37.964 [2024-06-10 12:09:31.629658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.629981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.964 [2024-06-10 12:09:31.629988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.964 qpair failed and we were unable to recover it. 00:31:37.964 [2024-06-10 12:09:31.630347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.630616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.630624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.965 qpair failed and we were unable to recover it. 00:31:37.965 [2024-06-10 12:09:31.630985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.631215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.631223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.965 qpair failed and we were unable to recover it. 
00:31:37.965 [2024-06-10 12:09:31.631595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.631943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.631952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.965 qpair failed and we were unable to recover it. 00:31:37.965 [2024-06-10 12:09:31.632181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.632351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.632358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.965 qpair failed and we were unable to recover it. 00:31:37.965 [2024-06-10 12:09:31.632579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.632951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.632958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.965 qpair failed and we were unable to recover it. 00:31:37.965 [2024-06-10 12:09:31.633321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.633676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.633684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.965 qpair failed and we were unable to recover it. 00:31:37.965 [2024-06-10 12:09:31.633939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.634327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.634335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.965 qpair failed and we were unable to recover it. 00:31:37.965 [2024-06-10 12:09:31.634540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.634823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.634830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.965 qpair failed and we were unable to recover it. 00:31:37.965 [2024-06-10 12:09:31.635057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.635422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.635430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.965 qpair failed and we were unable to recover it. 
00:31:37.965 [2024-06-10 12:09:31.635662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.636060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.636067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.965 qpair failed and we were unable to recover it. 00:31:37.965 [2024-06-10 12:09:31.636298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.636409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.636415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.965 qpair failed and we were unable to recover it. 00:31:37.965 [2024-06-10 12:09:31.636658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.636897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.636907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.965 qpair failed and we were unable to recover it. 00:31:37.965 [2024-06-10 12:09:31.637262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.637584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.637593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.965 qpair failed and we were unable to recover it. 00:31:37.965 [2024-06-10 12:09:31.637958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.638352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.638361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.965 qpair failed and we were unable to recover it. 00:31:37.965 [2024-06-10 12:09:31.638722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.638979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.638987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.965 qpair failed and we were unable to recover it. 00:31:37.965 [2024-06-10 12:09:31.639356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.639709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.639716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.965 qpair failed and we were unable to recover it. 
00:31:37.965 [2024-06-10 12:09:31.639941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.640311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.640319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.965 qpair failed and we were unable to recover it. 00:31:37.965 [2024-06-10 12:09:31.640614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.640989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.640997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.965 qpair failed and we were unable to recover it. 00:31:37.965 [2024-06-10 12:09:31.641329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.641531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.641539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.965 qpair failed and we were unable to recover it. 00:31:37.965 [2024-06-10 12:09:31.641916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.642267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.642276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.965 qpair failed and we were unable to recover it. 00:31:37.965 [2024-06-10 12:09:31.642662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.643059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.643066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.965 qpair failed and we were unable to recover it. 00:31:37.965 [2024-06-10 12:09:31.643301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.643461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.643467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.965 qpair failed and we were unable to recover it. 00:31:37.965 [2024-06-10 12:09:31.643678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.643852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.643860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.965 qpair failed and we were unable to recover it. 
00:31:37.965 [2024-06-10 12:09:31.644181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.644411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.644419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.965 qpair failed and we were unable to recover it. 00:31:37.965 [2024-06-10 12:09:31.644806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.645207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.645214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.965 qpair failed and we were unable to recover it. 00:31:37.965 [2024-06-10 12:09:31.645625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.646016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.646024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.965 qpair failed and we were unable to recover it. 00:31:37.965 [2024-06-10 12:09:31.646401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.646789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.646797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.965 qpair failed and we were unable to recover it. 00:31:37.965 [2024-06-10 12:09:31.647155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.965 [2024-06-10 12:09:31.647508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.647516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.966 qpair failed and we were unable to recover it. 00:31:37.966 [2024-06-10 12:09:31.647877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.648270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.648278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.966 qpair failed and we were unable to recover it. 00:31:37.966 [2024-06-10 12:09:31.648649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.648995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.649003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.966 qpair failed and we were unable to recover it. 
00:31:37.966 [2024-06-10 12:09:31.649356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.649714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.649721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.966 qpair failed and we were unable to recover it. 00:31:37.966 [2024-06-10 12:09:31.650035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.650413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.650420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.966 qpair failed and we were unable to recover it. 00:31:37.966 [2024-06-10 12:09:31.650643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.650912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.650919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.966 qpair failed and we were unable to recover it. 00:31:37.966 [2024-06-10 12:09:31.651143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.651409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.651417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.966 qpair failed and we were unable to recover it. 00:31:37.966 [2024-06-10 12:09:31.651806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.652149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.652156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.966 qpair failed and we were unable to recover it. 00:31:37.966 [2024-06-10 12:09:31.652578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.652951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.652959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.966 qpair failed and we were unable to recover it. 00:31:37.966 [2024-06-10 12:09:31.653317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.653613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.653621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.966 qpair failed and we were unable to recover it. 
00:31:37.966 [2024-06-10 12:09:31.653979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.654325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.654333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.966 qpair failed and we were unable to recover it. 00:31:37.966 [2024-06-10 12:09:31.654689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.655030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.655038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.966 qpair failed and we were unable to recover it. 00:31:37.966 [2024-06-10 12:09:31.655418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.655802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.655809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.966 qpair failed and we were unable to recover it. 00:31:37.966 [2024-06-10 12:09:31.656110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.656493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.656500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.966 qpair failed and we were unable to recover it. 00:31:37.966 [2024-06-10 12:09:31.656862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.657074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.657081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.966 qpair failed and we were unable to recover it. 00:31:37.966 [2024-06-10 12:09:31.657450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.657658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.657665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.966 qpair failed and we were unable to recover it. 00:31:37.966 [2024-06-10 12:09:31.658016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.658253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.658261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.966 qpair failed and we were unable to recover it. 
00:31:37.966 [2024-06-10 12:09:31.658467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.658649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.658657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.966 qpair failed and we were unable to recover it. 00:31:37.966 [2024-06-10 12:09:31.659027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.659309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.659317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.966 qpair failed and we were unable to recover it. 00:31:37.966 [2024-06-10 12:09:31.659551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.659833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.659841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.966 qpair failed and we were unable to recover it. 00:31:37.966 [2024-06-10 12:09:31.660165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.660406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.660414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.966 qpair failed and we were unable to recover it. 00:31:37.966 [2024-06-10 12:09:31.660781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.661016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.661025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.966 qpair failed and we were unable to recover it. 00:31:37.966 [2024-06-10 12:09:31.661382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.661752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.661760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.966 qpair failed and we were unable to recover it. 00:31:37.966 [2024-06-10 12:09:31.662150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.662509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.662517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.966 qpair failed and we were unable to recover it. 
00:31:37.966 [2024-06-10 12:09:31.662882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.663276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.663284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.966 qpair failed and we were unable to recover it. 00:31:37.966 [2024-06-10 12:09:31.663556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.663947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.966 [2024-06-10 12:09:31.663954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.966 qpair failed and we were unable to recover it. 00:31:37.966 [2024-06-10 12:09:31.664317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.664720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.664728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.967 qpair failed and we were unable to recover it. 00:31:37.967 [2024-06-10 12:09:31.665112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.665495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.665502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.967 qpair failed and we were unable to recover it. 00:31:37.967 [2024-06-10 12:09:31.665940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.666237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.666251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.967 qpair failed and we were unable to recover it. 00:31:37.967 [2024-06-10 12:09:31.666612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.667005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.667013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.967 qpair failed and we were unable to recover it. 00:31:37.967 [2024-06-10 12:09:31.667372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.667604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.667611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.967 qpair failed and we were unable to recover it. 
00:31:37.967 [2024-06-10 12:09:31.667943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.668115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.668123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.967 qpair failed and we were unable to recover it. 00:31:37.967 [2024-06-10 12:09:31.668468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.668856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.668863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.967 qpair failed and we were unable to recover it. 00:31:37.967 [2024-06-10 12:09:31.669222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.669401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.669408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.967 qpair failed and we were unable to recover it. 00:31:37.967 [2024-06-10 12:09:31.669784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.670131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.670139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.967 qpair failed and we were unable to recover it. 00:31:37.967 [2024-06-10 12:09:31.670515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.670577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.670584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.967 qpair failed and we were unable to recover it. 00:31:37.967 [2024-06-10 12:09:31.670918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.671270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.671277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.967 qpair failed and we were unable to recover it. 00:31:37.967 [2024-06-10 12:09:31.671643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.671988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.671995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.967 qpair failed and we were unable to recover it. 
00:31:37.967 [2024-06-10 12:09:31.672267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.672637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.672645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.967 qpair failed and we were unable to recover it. 00:31:37.967 [2024-06-10 12:09:31.673029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.673398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.673406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.967 qpair failed and we were unable to recover it. 00:31:37.967 [2024-06-10 12:09:31.673751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.673984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.673992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.967 qpair failed and we were unable to recover it. 00:31:37.967 [2024-06-10 12:09:31.674350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.674702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.674709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.967 qpair failed and we were unable to recover it. 00:31:37.967 [2024-06-10 12:09:31.675078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.675462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.675470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.967 qpair failed and we were unable to recover it. 00:31:37.967 [2024-06-10 12:09:31.675857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.676251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.676259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.967 qpair failed and we were unable to recover it. 00:31:37.967 [2024-06-10 12:09:31.676594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.676991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.676998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.967 qpair failed and we were unable to recover it. 
00:31:37.967 [2024-06-10 12:09:31.677361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.677760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.677767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.967 qpair failed and we were unable to recover it. 00:31:37.967 [2024-06-10 12:09:31.678121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.678505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.678512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.967 qpair failed and we were unable to recover it. 00:31:37.967 [2024-06-10 12:09:31.678900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.679289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.967 [2024-06-10 12:09:31.679297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.968 qpair failed and we were unable to recover it. 00:31:37.968 [2024-06-10 12:09:31.679654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.679819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.679826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.968 qpair failed and we were unable to recover it. 00:31:37.968 [2024-06-10 12:09:31.679918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.680248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.680257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.968 qpair failed and we were unable to recover it. 00:31:37.968 [2024-06-10 12:09:31.680451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.680643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.680651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.968 qpair failed and we were unable to recover it. 00:31:37.968 [2024-06-10 12:09:31.680856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.681184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.681191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.968 qpair failed and we were unable to recover it. 
00:31:37.968 [2024-06-10 12:09:31.681418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.681800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.681807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.968 qpair failed and we were unable to recover it. 00:31:37.968 [2024-06-10 12:09:31.682032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.682430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.682439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.968 qpair failed and we were unable to recover it. 00:31:37.968 [2024-06-10 12:09:31.682643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.682969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.682976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.968 qpair failed and we were unable to recover it. 00:31:37.968 [2024-06-10 12:09:31.683372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.683746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.683753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.968 qpair failed and we were unable to recover it. 00:31:37.968 [2024-06-10 12:09:31.684091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.684438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.684446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.968 qpair failed and we were unable to recover it. 00:31:37.968 [2024-06-10 12:09:31.684833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.685231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.685239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.968 qpair failed and we were unable to recover it. 00:31:37.968 [2024-06-10 12:09:31.685612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.686006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.686013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.968 qpair failed and we were unable to recover it. 
00:31:37.968 [2024-06-10 12:09:31.686408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.686459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.686465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.968 qpair failed and we were unable to recover it. 00:31:37.968 [2024-06-10 12:09:31.686654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.687030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.687037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.968 qpair failed and we were unable to recover it. 00:31:37.968 [2024-06-10 12:09:31.687317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.687700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.687707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.968 qpair failed and we were unable to recover it. 00:31:37.968 [2024-06-10 12:09:31.687986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.688378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.688386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.968 qpair failed and we were unable to recover it. 00:31:37.968 [2024-06-10 12:09:31.688761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.689153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.689161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.968 qpair failed and we were unable to recover it. 00:31:37.968 [2024-06-10 12:09:31.689397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.689784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.689792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.968 qpair failed and we were unable to recover it. 00:31:37.968 [2024-06-10 12:09:31.690144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.690509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.690517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.968 qpair failed and we were unable to recover it. 
00:31:37.968 [2024-06-10 12:09:31.690881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.690930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.690937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.968 qpair failed and we were unable to recover it. 00:31:37.968 [2024-06-10 12:09:31.691156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.691491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.691499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.968 qpair failed and we were unable to recover it. 00:31:37.968 [2024-06-10 12:09:31.691840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.692232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.692241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.968 qpair failed and we were unable to recover it. 00:31:37.968 [2024-06-10 12:09:31.692461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.692835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.692844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.968 qpair failed and we were unable to recover it. 00:31:37.968 [2024-06-10 12:09:31.693002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.693360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.693369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.968 qpair failed and we were unable to recover it. 00:31:37.968 [2024-06-10 12:09:31.693429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.693790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.693799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.968 qpair failed and we were unable to recover it. 00:31:37.968 [2024-06-10 12:09:31.693853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.694197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.694204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.968 qpair failed and we were unable to recover it. 
00:31:37.968 [2024-06-10 12:09:31.694545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.694790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.694797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.968 qpair failed and we were unable to recover it. 00:31:37.968 [2024-06-10 12:09:31.695158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.695494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.695503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.968 qpair failed and we were unable to recover it. 00:31:37.968 [2024-06-10 12:09:31.695891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.968 [2024-06-10 12:09:31.696289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.696296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.969 qpair failed and we were unable to recover it. 00:31:37.969 [2024-06-10 12:09:31.696672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.697014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.697022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.969 qpair failed and we were unable to recover it. 00:31:37.969 [2024-06-10 12:09:31.697247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.697407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.697414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.969 qpair failed and we were unable to recover it. 00:31:37.969 [2024-06-10 12:09:31.697795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.698188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.698196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.969 qpair failed and we were unable to recover it. 00:31:37.969 [2024-06-10 12:09:31.698572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.698927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.698935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.969 qpair failed and we were unable to recover it. 
00:31:37.969 [2024-06-10 12:09:31.699158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.699495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.699504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.969 qpair failed and we were unable to recover it. 00:31:37.969 [2024-06-10 12:09:31.699855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.700203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.700212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.969 qpair failed and we were unable to recover it. 00:31:37.969 [2024-06-10 12:09:31.700522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.700752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.700760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.969 qpair failed and we were unable to recover it. 00:31:37.969 [2024-06-10 12:09:31.701141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.701410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.701417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.969 qpair failed and we were unable to recover it. 00:31:37.969 [2024-06-10 12:09:31.701677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.702077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.702085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.969 qpair failed and we were unable to recover it. 00:31:37.969 [2024-06-10 12:09:31.702460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.702801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.702808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.969 qpair failed and we were unable to recover it. 00:31:37.969 [2024-06-10 12:09:31.703160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.703480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.703487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.969 qpair failed and we were unable to recover it. 
00:31:37.969 [2024-06-10 12:09:31.703871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.704073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.704081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.969 qpair failed and we were unable to recover it. 00:31:37.969 [2024-06-10 12:09:31.704462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.704700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.704707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.969 qpair failed and we were unable to recover it. 00:31:37.969 [2024-06-10 12:09:31.705068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.705301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.705309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.969 qpair failed and we were unable to recover it. 00:31:37.969 [2024-06-10 12:09:31.705619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.705993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.706000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.969 qpair failed and we were unable to recover it. 00:31:37.969 [2024-06-10 12:09:31.706224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.706567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.706578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.969 qpair failed and we were unable to recover it. 00:31:37.969 [2024-06-10 12:09:31.706899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.707264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.707271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.969 qpair failed and we were unable to recover it. 00:31:37.969 [2024-06-10 12:09:31.707639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.707985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.707992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.969 qpair failed and we were unable to recover it. 
00:31:37.969 [2024-06-10 12:09:31.708221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.708459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.708469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.969 qpair failed and we were unable to recover it. 00:31:37.969 [2024-06-10 12:09:31.708861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.709248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.709256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.969 qpair failed and we were unable to recover it. 00:31:37.969 [2024-06-10 12:09:31.709610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.709841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.709849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.969 qpair failed and we were unable to recover it. 00:31:37.969 [2024-06-10 12:09:31.710054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.710409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.710417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.969 qpair failed and we were unable to recover it. 00:31:37.969 [2024-06-10 12:09:31.710640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.710987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.710995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.969 qpair failed and we were unable to recover it. 00:31:37.969 [2024-06-10 12:09:31.711325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.711558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.711567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.969 qpair failed and we were unable to recover it. 00:31:37.969 [2024-06-10 12:09:31.711928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.712317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.712325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.969 qpair failed and we were unable to recover it. 
00:31:37.969 [2024-06-10 12:09:31.712552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.712940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.969 [2024-06-10 12:09:31.712949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.970 qpair failed and we were unable to recover it. 00:31:37.970 [2024-06-10 12:09:31.713173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.970 [2024-06-10 12:09:31.713370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.970 [2024-06-10 12:09:31.713378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.970 qpair failed and we were unable to recover it. 00:31:37.970 [2024-06-10 12:09:31.713734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.970 [2024-06-10 12:09:31.714133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.970 [2024-06-10 12:09:31.714141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:37.970 qpair failed and we were unable to recover it. 00:31:38.239 [2024-06-10 12:09:31.714531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.239 [2024-06-10 12:09:31.714765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.239 [2024-06-10 12:09:31.714773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.239 qpair failed and we were unable to recover it. 00:31:38.239 [2024-06-10 12:09:31.715132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.239 [2024-06-10 12:09:31.715343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.239 [2024-06-10 12:09:31.715351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.239 qpair failed and we were unable to recover it. 00:31:38.239 [2024-06-10 12:09:31.715589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.715980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.715988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.240 qpair failed and we were unable to recover it. 00:31:38.240 [2024-06-10 12:09:31.716371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.716724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.716732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.240 qpair failed and we were unable to recover it. 
00:31:38.240 [2024-06-10 12:09:31.717092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.717442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.717450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.240 qpair failed and we were unable to recover it. 00:31:38.240 [2024-06-10 12:09:31.717809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.718196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.718204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.240 qpair failed and we were unable to recover it. 00:31:38.240 [2024-06-10 12:09:31.718555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.718901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.718908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.240 qpair failed and we were unable to recover it. 00:31:38.240 [2024-06-10 12:09:31.719291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.719478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.719486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.240 qpair failed and we were unable to recover it. 00:31:38.240 [2024-06-10 12:09:31.719760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.720109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.720117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.240 qpair failed and we were unable to recover it. 00:31:38.240 [2024-06-10 12:09:31.720517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.720873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.720881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.240 qpair failed and we were unable to recover it. 00:31:38.240 [2024-06-10 12:09:31.721295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.721624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.721632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.240 qpair failed and we were unable to recover it. 
00:31:38.240 [2024-06-10 12:09:31.721871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.722108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.722116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.240 qpair failed and we were unable to recover it. 00:31:38.240 [2024-06-10 12:09:31.722520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.722871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.722879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.240 qpair failed and we were unable to recover it. 00:31:38.240 [2024-06-10 12:09:31.723229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.723596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.723604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.240 qpair failed and we were unable to recover it. 00:31:38.240 [2024-06-10 12:09:31.723828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.723983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.723990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.240 qpair failed and we were unable to recover it. 00:31:38.240 [2024-06-10 12:09:31.724333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.724718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.724726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.240 qpair failed and we were unable to recover it. 00:31:38.240 [2024-06-10 12:09:31.725088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.725383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.725391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.240 qpair failed and we were unable to recover it. 00:31:38.240 [2024-06-10 12:09:31.725561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.725893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.725901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.240 qpair failed and we were unable to recover it. 
00:31:38.240 [2024-06-10 12:09:31.726264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.726601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.726609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.240 qpair failed and we were unable to recover it. 00:31:38.240 [2024-06-10 12:09:31.726994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.727387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.727396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.240 qpair failed and we were unable to recover it. 00:31:38.240 [2024-06-10 12:09:31.727757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.728149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.728157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.240 qpair failed and we were unable to recover it. 00:31:38.240 [2024-06-10 12:09:31.728352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.728732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.728740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.240 qpair failed and we were unable to recover it. 00:31:38.240 [2024-06-10 12:09:31.729099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.729403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.729411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.240 qpair failed and we were unable to recover it. 00:31:38.240 [2024-06-10 12:09:31.729762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.730156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.730165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.240 qpair failed and we were unable to recover it. 00:31:38.240 [2024-06-10 12:09:31.730468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.730799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.730806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.240 qpair failed and we were unable to recover it. 
00:31:38.240 [2024-06-10 12:09:31.731176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.731415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.731424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.240 qpair failed and we were unable to recover it. 00:31:38.240 [2024-06-10 12:09:31.731813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.732070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.732078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.240 qpair failed and we were unable to recover it. 00:31:38.240 [2024-06-10 12:09:31.732300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.732501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.732517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.240 qpair failed and we were unable to recover it. 00:31:38.240 [2024-06-10 12:09:31.732877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.733108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.733116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.240 qpair failed and we were unable to recover it. 00:31:38.240 [2024-06-10 12:09:31.733472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.733628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.733636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.240 qpair failed and we were unable to recover it. 00:31:38.240 [2024-06-10 12:09:31.733855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.240 [2024-06-10 12:09:31.734209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.734217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.241 qpair failed and we were unable to recover it. 00:31:38.241 [2024-06-10 12:09:31.734598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.734944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.734951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.241 qpair failed and we were unable to recover it. 
00:31:38.241 [2024-06-10 12:09:31.735164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.735499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.735507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.241 qpair failed and we were unable to recover it. 00:31:38.241 [2024-06-10 12:09:31.735871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.736228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.736236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.241 qpair failed and we were unable to recover it. 00:31:38.241 [2024-06-10 12:09:31.736492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.736844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.736852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.241 qpair failed and we were unable to recover it. 00:31:38.241 [2024-06-10 12:09:31.737074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.737406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.737414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.241 qpair failed and we were unable to recover it. 00:31:38.241 [2024-06-10 12:09:31.737774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.738168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.738176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.241 qpair failed and we were unable to recover it. 00:31:38.241 [2024-06-10 12:09:31.738375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.738605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.738613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.241 qpair failed and we were unable to recover it. 00:31:38.241 [2024-06-10 12:09:31.738979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.739194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.739201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.241 qpair failed and we were unable to recover it. 
00:31:38.241 [2024-06-10 12:09:31.739642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.739984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.739991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.241 qpair failed and we were unable to recover it. 00:31:38.241 [2024-06-10 12:09:31.740196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.740565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.740573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.241 qpair failed and we were unable to recover it. 00:31:38.241 [2024-06-10 12:09:31.740933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.741327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.741334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.241 qpair failed and we were unable to recover it. 00:31:38.241 [2024-06-10 12:09:31.741705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.742052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.742060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.241 qpair failed and we were unable to recover it. 00:31:38.241 [2024-06-10 12:09:31.742442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.742835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.742842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.241 qpair failed and we were unable to recover it. 00:31:38.241 [2024-06-10 12:09:31.743198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.743582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.743590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.241 qpair failed and we were unable to recover it. 00:31:38.241 [2024-06-10 12:09:31.743814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.744210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.744218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.241 qpair failed and we were unable to recover it. 
00:31:38.241 [2024-06-10 12:09:31.744570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.744935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.744943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.241 qpair failed and we were unable to recover it. 00:31:38.241 [2024-06-10 12:09:31.745335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.745554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.745561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.241 qpair failed and we were unable to recover it. 00:31:38.241 [2024-06-10 12:09:31.745925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.746155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.746163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.241 qpair failed and we were unable to recover it. 00:31:38.241 [2024-06-10 12:09:31.746368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.746723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.746731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.241 qpair failed and we were unable to recover it. 00:31:38.241 [2024-06-10 12:09:31.746955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.747333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.747341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.241 qpair failed and we were unable to recover it. 00:31:38.241 [2024-06-10 12:09:31.747732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.748128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.748135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.241 qpair failed and we were unable to recover it. 00:31:38.241 [2024-06-10 12:09:31.748520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.748738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.748745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.241 qpair failed and we were unable to recover it. 
00:31:38.241 [2024-06-10 12:09:31.749142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.749361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.749370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.241 qpair failed and we were unable to recover it. 00:31:38.241 [2024-06-10 12:09:31.749603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.749958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.749965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.241 qpair failed and we were unable to recover it. 00:31:38.241 [2024-06-10 12:09:31.750359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.750738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.750746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.241 qpair failed and we were unable to recover it. 00:31:38.241 [2024-06-10 12:09:31.750975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.751369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.751377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.241 qpair failed and we were unable to recover it. 00:31:38.241 [2024-06-10 12:09:31.751743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.241 [2024-06-10 12:09:31.751980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.751987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 00:31:38.242 [2024-06-10 12:09:31.752348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.752734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.752742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 00:31:38.242 [2024-06-10 12:09:31.752928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.753270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.753278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 
00:31:38.242 [2024-06-10 12:09:31.753677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.753743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.753749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 00:31:38.242 [2024-06-10 12:09:31.754086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.754285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.754293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 00:31:38.242 [2024-06-10 12:09:31.754669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.755065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.755072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 00:31:38.242 [2024-06-10 12:09:31.755449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.755844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.755851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 00:31:38.242 [2024-06-10 12:09:31.756214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.756267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.756274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 00:31:38.242 [2024-06-10 12:09:31.756589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.756984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.756992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 00:31:38.242 [2024-06-10 12:09:31.757184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.757292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.757300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 
00:31:38.242 [2024-06-10 12:09:31.757695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.758042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.758050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 00:31:38.242 [2024-06-10 12:09:31.758411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.758765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.758772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 00:31:38.242 [2024-06-10 12:09:31.759135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.759480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.759487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 00:31:38.242 [2024-06-10 12:09:31.759853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.760206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.760214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 00:31:38.242 [2024-06-10 12:09:31.760437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.760788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.760796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 00:31:38.242 [2024-06-10 12:09:31.761155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.761359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.761366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 00:31:38.242 [2024-06-10 12:09:31.761427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.761743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.761750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 
00:31:38.242 [2024-06-10 12:09:31.761847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.762112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.762119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 00:31:38.242 [2024-06-10 12:09:31.762501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.762850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.762858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 00:31:38.242 [2024-06-10 12:09:31.763219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.763584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.763591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 00:31:38.242 [2024-06-10 12:09:31.763909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.764303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.764311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 00:31:38.242 [2024-06-10 12:09:31.764539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.764933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.764941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 00:31:38.242 [2024-06-10 12:09:31.765316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.765695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.765703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 00:31:38.242 [2024-06-10 12:09:31.765926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.766318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.766325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 
00:31:38.242 [2024-06-10 12:09:31.766695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.766949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.766956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 00:31:38.242 [2024-06-10 12:09:31.767317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.767710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.767717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 00:31:38.242 [2024-06-10 12:09:31.768108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.768490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.768498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 00:31:38.242 [2024-06-10 12:09:31.768853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.769247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.769256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 00:31:38.242 [2024-06-10 12:09:31.769608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.769999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.242 [2024-06-10 12:09:31.770007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.242 qpair failed and we were unable to recover it. 00:31:38.242 [2024-06-10 12:09:31.770361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.770708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.770716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 00:31:38.243 [2024-06-10 12:09:31.771097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.771327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.771335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 
00:31:38.243 [2024-06-10 12:09:31.771721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.772070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.772078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 00:31:38.243 [2024-06-10 12:09:31.772438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.772782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.772790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 00:31:38.243 [2024-06-10 12:09:31.773042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.773091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.773098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 00:31:38.243 [2024-06-10 12:09:31.773460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.773808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.773816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 00:31:38.243 [2024-06-10 12:09:31.774037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.774199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.774207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 00:31:38.243 [2024-06-10 12:09:31.774577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.774967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.774975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 00:31:38.243 [2024-06-10 12:09:31.775335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.775723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.775731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 
00:31:38.243 [2024-06-10 12:09:31.776116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.776395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.776402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 00:31:38.243 [2024-06-10 12:09:31.776771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.777166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.777174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 00:31:38.243 [2024-06-10 12:09:31.777535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.777748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.777755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 00:31:38.243 [2024-06-10 12:09:31.778004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.778355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.778363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 00:31:38.243 [2024-06-10 12:09:31.778699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.779090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.779098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 00:31:38.243 [2024-06-10 12:09:31.779458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.779672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.779679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 00:31:38.243 [2024-06-10 12:09:31.779898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.780270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.780278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 
00:31:38.243 [2024-06-10 12:09:31.780537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.780929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.780936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 00:31:38.243 [2024-06-10 12:09:31.781154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.781385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.781393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 00:31:38.243 [2024-06-10 12:09:31.781574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.781924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.781931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 00:31:38.243 [2024-06-10 12:09:31.782215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.782455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.782462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 00:31:38.243 [2024-06-10 12:09:31.782831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.783224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.783231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 00:31:38.243 [2024-06-10 12:09:31.783627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.783974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.783981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 00:31:38.243 [2024-06-10 12:09:31.784185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.784530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.784539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 
00:31:38.243 [2024-06-10 12:09:31.784899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.785131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.785139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 00:31:38.243 [2024-06-10 12:09:31.785522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.785756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.785764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 00:31:38.243 [2024-06-10 12:09:31.786147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.786365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.786372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 00:31:38.243 [2024-06-10 12:09:31.786590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.786743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.786752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 00:31:38.243 [2024-06-10 12:09:31.787000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.787168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.787177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 00:31:38.243 [2024-06-10 12:09:31.787518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.787753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.243 [2024-06-10 12:09:31.787760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.243 qpair failed and we were unable to recover it. 00:31:38.244 [2024-06-10 12:09:31.787983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.788373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.788380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.244 qpair failed and we were unable to recover it. 
00:31:38.244 [2024-06-10 12:09:31.788741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.789133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.789140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.244 qpair failed and we were unable to recover it. 00:31:38.244 [2024-06-10 12:09:31.789418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.789597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.789605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.244 qpair failed and we were unable to recover it. 00:31:38.244 [2024-06-10 12:09:31.790037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.790283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.790293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.244 qpair failed and we were unable to recover it. 00:31:38.244 [2024-06-10 12:09:31.790703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.791098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.791105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.244 qpair failed and we were unable to recover it. 00:31:38.244 [2024-06-10 12:09:31.791468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.791868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.791875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.244 qpair failed and we were unable to recover it. 00:31:38.244 [2024-06-10 12:09:31.792103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.792488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.792496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.244 qpair failed and we were unable to recover it. 00:31:38.244 [2024-06-10 12:09:31.792705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.793095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.793102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.244 qpair failed and we were unable to recover it. 
00:31:38.244 [2024-06-10 12:09:31.793463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.793809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.793816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.244 qpair failed and we were unable to recover it. 00:31:38.244 [2024-06-10 12:09:31.794176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.794415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.794423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.244 qpair failed and we were unable to recover it. 00:31:38.244 [2024-06-10 12:09:31.794787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.795135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.795143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.244 qpair failed and we were unable to recover it. 00:31:38.244 [2024-06-10 12:09:31.795343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.795709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.795717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.244 qpair failed and we were unable to recover it. 00:31:38.244 [2024-06-10 12:09:31.795909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.796264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.796272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.244 qpair failed and we were unable to recover it. 00:31:38.244 [2024-06-10 12:09:31.796636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.797028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.797037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.244 qpair failed and we were unable to recover it. 00:31:38.244 [2024-06-10 12:09:31.797423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.797683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.797691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.244 qpair failed and we were unable to recover it. 
00:31:38.244 [2024-06-10 12:09:31.798053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.798444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.798452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.244 qpair failed and we were unable to recover it. 00:31:38.244 [2024-06-10 12:09:31.798801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.799148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.799155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.244 qpair failed and we were unable to recover it. 00:31:38.244 [2024-06-10 12:09:31.799525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.799872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.799879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.244 qpair failed and we were unable to recover it. 00:31:38.244 [2024-06-10 12:09:31.800298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.800467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.800476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.244 qpair failed and we were unable to recover it. 00:31:38.244 [2024-06-10 12:09:31.800816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.801195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.801203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.244 qpair failed and we were unable to recover it. 00:31:38.244 [2024-06-10 12:09:31.801563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.801802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.801810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.244 qpair failed and we were unable to recover it. 00:31:38.244 [2024-06-10 12:09:31.802172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.802405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.244 [2024-06-10 12:09:31.802412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.244 qpair failed and we were unable to recover it. 
00:31:38.244 [2024-06-10 12:09:31.802792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.244 [2024-06-10 12:09:31.803091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.244 [2024-06-10 12:09:31.803098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420
00:31:38.244 qpair failed and we were unable to recover it.
[... the same failure pattern repeats continuously for every reconnect attempt from 12:09:31.803304 through 12:09:31.900403: one or two posix.c:1032:posix_sock_create "connect() failed, errno = 111" errors, then an nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420", followed by "qpair failed and we were unable to recover it." ...]
00:31:38.250 [2024-06-10 12:09:31.900808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.250 [2024-06-10 12:09:31.901200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.250 [2024-06-10 12:09:31.901208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420
00:31:38.250 qpair failed and we were unable to recover it.
00:31:38.250 [2024-06-10 12:09:31.901597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.901995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.902003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.250 qpair failed and we were unable to recover it. 00:31:38.250 [2024-06-10 12:09:31.902180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.902530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.902537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.250 qpair failed and we were unable to recover it. 00:31:38.250 [2024-06-10 12:09:31.902924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.903153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.903161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.250 qpair failed and we were unable to recover it. 00:31:38.250 [2024-06-10 12:09:31.903519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.903911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.903918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.250 qpair failed and we were unable to recover it. 00:31:38.250 [2024-06-10 12:09:31.904307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.904361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.904367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.250 qpair failed and we were unable to recover it. 00:31:38.250 [2024-06-10 12:09:31.904566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.904927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.904934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.250 qpair failed and we were unable to recover it. 00:31:38.250 [2024-06-10 12:09:31.905316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.905676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.905684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.250 qpair failed and we were unable to recover it. 
00:31:38.250 [2024-06-10 12:09:31.906068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.906300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.906309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.250 qpair failed and we were unable to recover it. 00:31:38.250 [2024-06-10 12:09:31.906640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.907033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.907042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.250 qpair failed and we were unable to recover it. 00:31:38.250 [2024-06-10 12:09:31.907265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.907330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.907337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.250 qpair failed and we were unable to recover it. 00:31:38.250 [2024-06-10 12:09:31.907687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.908086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.908093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.250 qpair failed and we were unable to recover it. 00:31:38.250 [2024-06-10 12:09:31.908453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.908785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.908793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.250 qpair failed and we were unable to recover it. 00:31:38.250 [2024-06-10 12:09:31.909173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.909530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.909537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.250 qpair failed and we were unable to recover it. 00:31:38.250 [2024-06-10 12:09:31.909592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.909932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.909940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.250 qpair failed and we were unable to recover it. 
00:31:38.250 [2024-06-10 12:09:31.910344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.910582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.910589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.250 qpair failed and we were unable to recover it. 00:31:38.250 [2024-06-10 12:09:31.911010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.250 [2024-06-10 12:09:31.911130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.911137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 00:31:38.251 [2024-06-10 12:09:31.911333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.911677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.911685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 00:31:38.251 [2024-06-10 12:09:31.911910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.912257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.912265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 00:31:38.251 [2024-06-10 12:09:31.912637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.913021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.913030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 00:31:38.251 [2024-06-10 12:09:31.913391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.913782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.913790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 00:31:38.251 [2024-06-10 12:09:31.914179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.914435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.914443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 
00:31:38.251 [2024-06-10 12:09:31.914810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.915201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.915208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 00:31:38.251 [2024-06-10 12:09:31.915581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.915974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.915982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 00:31:38.251 [2024-06-10 12:09:31.916297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.916686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.916693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 00:31:38.251 [2024-06-10 12:09:31.916953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.917211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.917218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 00:31:38.251 [2024-06-10 12:09:31.917593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.917943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.917950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 00:31:38.251 [2024-06-10 12:09:31.918307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.918542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.918549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 00:31:38.251 [2024-06-10 12:09:31.918691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.918901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.918908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 
00:31:38.251 [2024-06-10 12:09:31.919262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.919311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.919317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 00:31:38.251 [2024-06-10 12:09:31.919507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.919840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.919847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 00:31:38.251 [2024-06-10 12:09:31.920207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.920496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.920504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 00:31:38.251 [2024-06-10 12:09:31.920867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.921058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.921067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 00:31:38.251 [2024-06-10 12:09:31.921471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.921814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.921822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 00:31:38.251 [2024-06-10 12:09:31.922181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.922394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.922402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 00:31:38.251 [2024-06-10 12:09:31.922756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.923101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.923108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 
00:31:38.251 [2024-06-10 12:09:31.923477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.923803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.923810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 00:31:38.251 [2024-06-10 12:09:31.924183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.924556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.924563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 00:31:38.251 [2024-06-10 12:09:31.924849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.925208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.925215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 00:31:38.251 [2024-06-10 12:09:31.925569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.925967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.925974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 00:31:38.251 [2024-06-10 12:09:31.926200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.926425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.926433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 00:31:38.251 [2024-06-10 12:09:31.926656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.927041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.927049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 00:31:38.251 [2024-06-10 12:09:31.927250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.927483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.927490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 
00:31:38.251 [2024-06-10 12:09:31.927849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.928240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.928250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 00:31:38.251 [2024-06-10 12:09:31.928606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.928952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.251 [2024-06-10 12:09:31.928960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.251 qpair failed and we were unable to recover it. 00:31:38.251 [2024-06-10 12:09:31.929240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.929630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.929637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 00:31:38.252 [2024-06-10 12:09:31.929997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.930385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.930392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 00:31:38.252 [2024-06-10 12:09:31.930796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.931140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.931148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 00:31:38.252 [2024-06-10 12:09:31.931529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.931816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.931824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 00:31:38.252 [2024-06-10 12:09:31.932199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.932541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.932549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 
00:31:38.252 [2024-06-10 12:09:31.932756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.932921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.932929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 00:31:38.252 [2024-06-10 12:09:31.933261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.933603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.933611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 00:31:38.252 [2024-06-10 12:09:31.933979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.934221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.934229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 00:31:38.252 [2024-06-10 12:09:31.934431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.934791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.934798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 00:31:38.252 [2024-06-10 12:09:31.935171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.935533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.935541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 00:31:38.252 [2024-06-10 12:09:31.935735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.936120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.936128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 00:31:38.252 [2024-06-10 12:09:31.936355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.936753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.936760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 
00:31:38.252 [2024-06-10 12:09:31.937160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.937349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.937356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 00:31:38.252 [2024-06-10 12:09:31.937715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.938058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.938065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 00:31:38.252 [2024-06-10 12:09:31.938428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.938645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.938653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 00:31:38.252 [2024-06-10 12:09:31.938894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.939053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.939061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 00:31:38.252 [2024-06-10 12:09:31.939439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.939788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.939796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 00:31:38.252 [2024-06-10 12:09:31.940151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.940364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.940371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 00:31:38.252 [2024-06-10 12:09:31.940602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.940951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.940958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 
00:31:38.252 [2024-06-10 12:09:31.941398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.941565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.941573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 00:31:38.252 [2024-06-10 12:09:31.941940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.942141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.942150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 00:31:38.252 [2024-06-10 12:09:31.942529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.942872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.942880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 00:31:38.252 [2024-06-10 12:09:31.943248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.943428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.943436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 00:31:38.252 [2024-06-10 12:09:31.943808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.944041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.944048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 00:31:38.252 [2024-06-10 12:09:31.944270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.944627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.944635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 00:31:38.252 [2024-06-10 12:09:31.944995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.945386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.945393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 
00:31:38.252 [2024-06-10 12:09:31.945619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.945938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.945946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 00:31:38.252 [2024-06-10 12:09:31.946353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.946565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.946573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 00:31:38.252 [2024-06-10 12:09:31.946959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.947171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.252 [2024-06-10 12:09:31.947179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.252 qpair failed and we were unable to recover it. 00:31:38.253 [2024-06-10 12:09:31.947515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.947906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.947913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.253 qpair failed and we were unable to recover it. 00:31:38.253 [2024-06-10 12:09:31.948206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.948590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.948598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.253 qpair failed and we were unable to recover it. 00:31:38.253 [2024-06-10 12:09:31.948829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.948997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.949005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.253 qpair failed and we were unable to recover it. 00:31:38.253 [2024-06-10 12:09:31.949220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.949530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.949538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.253 qpair failed and we were unable to recover it. 
00:31:38.253 [2024-06-10 12:09:31.949904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.950248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.950256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.253 qpair failed and we were unable to recover it. 00:31:38.253 [2024-06-10 12:09:31.950673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.951071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.951078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.253 qpair failed and we were unable to recover it. 00:31:38.253 [2024-06-10 12:09:31.951442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.951680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.951687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.253 qpair failed and we were unable to recover it. 00:31:38.253 [2024-06-10 12:09:31.952074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.952389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.952398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.253 qpair failed and we were unable to recover it. 00:31:38.253 [2024-06-10 12:09:31.952792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.953146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.953153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.253 qpair failed and we were unable to recover it. 00:31:38.253 [2024-06-10 12:09:31.953383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.953679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.953686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.253 qpair failed and we were unable to recover it. 00:31:38.253 [2024-06-10 12:09:31.954086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.954318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.954325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.253 qpair failed and we were unable to recover it. 
00:31:38.253 [2024-06-10 12:09:31.954604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.954951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.954958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.253 qpair failed and we were unable to recover it. 00:31:38.253 [2024-06-10 12:09:31.955323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.955529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.955537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.253 qpair failed and we were unable to recover it. 00:31:38.253 [2024-06-10 12:09:31.955804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.956154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.956162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.253 qpair failed and we were unable to recover it. 00:31:38.253 [2024-06-10 12:09:31.956520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.956753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.956760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.253 qpair failed and we were unable to recover it. 00:31:38.253 [2024-06-10 12:09:31.956951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.957309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.957317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.253 qpair failed and we were unable to recover it. 00:31:38.253 [2024-06-10 12:09:31.957621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.957827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.957835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.253 qpair failed and we were unable to recover it. 00:31:38.253 [2024-06-10 12:09:31.958192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.958547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.958554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.253 qpair failed and we were unable to recover it. 
00:31:38.253 [2024-06-10 12:09:31.958936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.959327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.959334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.253 qpair failed and we were unable to recover it. 00:31:38.253 [2024-06-10 12:09:31.959716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.960106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.960114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.253 qpair failed and we were unable to recover it. 00:31:38.253 [2024-06-10 12:09:31.960477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.960869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.960877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.253 qpair failed and we were unable to recover it. 00:31:38.253 [2024-06-10 12:09:31.961239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.961622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.961630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.253 qpair failed and we were unable to recover it. 00:31:38.253 [2024-06-10 12:09:31.961988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.962381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.962388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.253 qpair failed and we were unable to recover it. 00:31:38.253 [2024-06-10 12:09:31.962780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.963175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.253 [2024-06-10 12:09:31.963183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.253 qpair failed and we were unable to recover it. 00:31:38.254 [2024-06-10 12:09:31.963545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.963942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.963950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.254 qpair failed and we were unable to recover it. 
00:31:38.254 [2024-06-10 12:09:31.964147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.964490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.964498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.254 qpair failed and we were unable to recover it. 00:31:38.254 [2024-06-10 12:09:31.964860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.965249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.965257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.254 qpair failed and we were unable to recover it. 00:31:38.254 [2024-06-10 12:09:31.965497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.965728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.965735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.254 qpair failed and we were unable to recover it. 00:31:38.254 [2024-06-10 12:09:31.966120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.966512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.966520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.254 qpair failed and we were unable to recover it. 00:31:38.254 [2024-06-10 12:09:31.966879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.967300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.967308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.254 qpair failed and we were unable to recover it. 00:31:38.254 [2024-06-10 12:09:31.967673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.968069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.968076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.254 qpair failed and we were unable to recover it. 00:31:38.254 [2024-06-10 12:09:31.968289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.968651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.968659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.254 qpair failed and we were unable to recover it. 
00:31:38.254 [2024-06-10 12:09:31.968822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.969040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.969047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.254 qpair failed and we were unable to recover it. 00:31:38.254 [2024-06-10 12:09:31.969360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.969732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.969740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.254 qpair failed and we were unable to recover it. 00:31:38.254 [2024-06-10 12:09:31.969963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.970351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.970359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.254 qpair failed and we were unable to recover it. 00:31:38.254 [2024-06-10 12:09:31.970731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.971077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.971084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.254 qpair failed and we were unable to recover it. 00:31:38.254 [2024-06-10 12:09:31.971295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.971342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.971348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.254 qpair failed and we were unable to recover it. 00:31:38.254 [2024-06-10 12:09:31.971626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.972004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.972012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.254 qpair failed and we were unable to recover it. 00:31:38.254 [2024-06-10 12:09:31.972209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.972565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.972572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.254 qpair failed and we were unable to recover it. 
00:31:38.254 [2024-06-10 12:09:31.972795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.973081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.973088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.254 qpair failed and we were unable to recover it. 00:31:38.254 [2024-06-10 12:09:31.973454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.973848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.973856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.254 qpair failed and we were unable to recover it. 00:31:38.254 [2024-06-10 12:09:31.974058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.974252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.974261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.254 qpair failed and we were unable to recover it. 00:31:38.254 [2024-06-10 12:09:31.974322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.974693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.974700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.254 qpair failed and we were unable to recover it. 00:31:38.254 [2024-06-10 12:09:31.975124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.975466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.975474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.254 qpair failed and we were unable to recover it. 00:31:38.254 [2024-06-10 12:09:31.975835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.976227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.976234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.254 qpair failed and we were unable to recover it. 00:31:38.254 [2024-06-10 12:09:31.976463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.976811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.976818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.254 qpair failed and we were unable to recover it. 
00:31:38.254 [2024-06-10 12:09:31.977187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.977568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.977576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.254 qpair failed and we were unable to recover it. 00:31:38.254 [2024-06-10 12:09:31.977943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.978333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.978340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.254 qpair failed and we were unable to recover it. 00:31:38.254 [2024-06-10 12:09:31.978706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.978879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.978887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.254 qpair failed and we were unable to recover it. 00:31:38.254 [2024-06-10 12:09:31.979244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.979619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.979626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.254 qpair failed and we were unable to recover it. 00:31:38.254 [2024-06-10 12:09:31.979852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.980203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.980211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.254 qpair failed and we were unable to recover it. 00:31:38.254 [2024-06-10 12:09:31.980593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.980979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.980987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.254 qpair failed and we were unable to recover it. 00:31:38.254 [2024-06-10 12:09:31.981349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.254 [2024-06-10 12:09:31.981564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.981572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 
00:31:38.255 [2024-06-10 12:09:31.981779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.982125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.982132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 00:31:38.255 [2024-06-10 12:09:31.982510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.982860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.982867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 00:31:38.255 [2024-06-10 12:09:31.983262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.983593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.983600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 00:31:38.255 [2024-06-10 12:09:31.983959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.984162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.984170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 00:31:38.255 [2024-06-10 12:09:31.984542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.984938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.984945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 00:31:38.255 [2024-06-10 12:09:31.985169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.985289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.985296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 00:31:38.255 [2024-06-10 12:09:31.985532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.985922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.985929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 
00:31:38.255 [2024-06-10 12:09:31.986368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.986671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.986679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 00:31:38.255 [2024-06-10 12:09:31.987032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.987315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.987323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 00:31:38.255 [2024-06-10 12:09:31.987659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.988052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.988060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 00:31:38.255 [2024-06-10 12:09:31.988265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.988319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.988327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 00:31:38.255 [2024-06-10 12:09:31.988524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.988907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.988914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 00:31:38.255 [2024-06-10 12:09:31.989267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.989504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.989512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 00:31:38.255 [2024-06-10 12:09:31.989875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.990274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.990282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 
00:31:38.255 [2024-06-10 12:09:31.990633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.991026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.991034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 00:31:38.255 [2024-06-10 12:09:31.991404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.991774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.991782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 00:31:38.255 [2024-06-10 12:09:31.992139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.992496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.992503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 00:31:38.255 [2024-06-10 12:09:31.992712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.993028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.993036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 00:31:38.255 [2024-06-10 12:09:31.993406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.993760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.993768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 00:31:38.255 [2024-06-10 12:09:31.994130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.994485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.994493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 00:31:38.255 [2024-06-10 12:09:31.994761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.995096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.995103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 
00:31:38.255 [2024-06-10 12:09:31.995536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.995927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.995934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 00:31:38.255 [2024-06-10 12:09:31.996327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.996428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.996435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 00:31:38.255 [2024-06-10 12:09:31.996644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.996998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.997007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 00:31:38.255 [2024-06-10 12:09:31.997360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.997754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.997762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 00:31:38.255 [2024-06-10 12:09:31.998124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.998486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.998493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 00:31:38.255 [2024-06-10 12:09:31.998717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.999107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.999114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 00:31:38.255 [2024-06-10 12:09:31.999335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.999506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:31.999514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.255 qpair failed and we were unable to recover it. 
00:31:38.255 [2024-06-10 12:09:31.999870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.255 [2024-06-10 12:09:32.000111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.256 [2024-06-10 12:09:32.000118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.256 qpair failed and we were unable to recover it. 00:31:38.256 [2024-06-10 12:09:32.000320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.256 [2024-06-10 12:09:32.000679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.256 [2024-06-10 12:09:32.000686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.256 qpair failed and we were unable to recover it. 00:31:38.256 [2024-06-10 12:09:32.001026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.256 [2024-06-10 12:09:32.001401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.256 [2024-06-10 12:09:32.001409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.256 qpair failed and we were unable to recover it. 00:31:38.256 [2024-06-10 12:09:32.001770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.256 [2024-06-10 12:09:32.002110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.256 [2024-06-10 12:09:32.002118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.256 qpair failed and we were unable to recover it. 00:31:38.256 [2024-06-10 12:09:32.002509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.533 [2024-06-10 12:09:32.002782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.533 [2024-06-10 12:09:32.002790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.534 qpair failed and we were unable to recover it. 00:31:38.534 [2024-06-10 12:09:32.002997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.003248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.003258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.534 qpair failed and we were unable to recover it. 00:31:38.534 [2024-06-10 12:09:32.003586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.003977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.003984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.534 qpair failed and we were unable to recover it. 
00:31:38.534 [2024-06-10 12:09:32.004344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.004554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.004562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.534 qpair failed and we were unable to recover it. 00:31:38.534 [2024-06-10 12:09:32.004785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.005128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.005136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.534 qpair failed and we were unable to recover it. 00:31:38.534 [2024-06-10 12:09:32.005517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.005752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.005760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.534 qpair failed and we were unable to recover it. 00:31:38.534 [2024-06-10 12:09:32.006148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.006364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.006372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.534 qpair failed and we were unable to recover it. 00:31:38.534 [2024-06-10 12:09:32.006722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.006956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.006963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.534 qpair failed and we were unable to recover it. 00:31:38.534 [2024-06-10 12:09:32.007312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.007709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.007717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.534 qpair failed and we were unable to recover it. 00:31:38.534 [2024-06-10 12:09:32.008022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.008321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.008328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.534 qpair failed and we were unable to recover it. 
00:31:38.534 [2024-06-10 12:09:32.008563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.008953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.008961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.534 qpair failed and we were unable to recover it. 00:31:38.534 [2024-06-10 12:09:32.009310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.009656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.009665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.534 qpair failed and we were unable to recover it. 00:31:38.534 [2024-06-10 12:09:32.009891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.010283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.010291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.534 qpair failed and we were unable to recover it. 00:31:38.534 [2024-06-10 12:09:32.010515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.010906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.010913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.534 qpair failed and we were unable to recover it. 00:31:38.534 [2024-06-10 12:09:32.011292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.011670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.011678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.534 qpair failed and we were unable to recover it. 00:31:38.534 [2024-06-10 12:09:32.012042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.012422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.012430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.534 qpair failed and we were unable to recover it. 00:31:38.534 [2024-06-10 12:09:32.012798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.013029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.013036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.534 qpair failed and we were unable to recover it. 
00:31:38.534 [2024-06-10 12:09:32.013259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.013490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.013498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.534 qpair failed and we were unable to recover it. 00:31:38.534 [2024-06-10 12:09:32.013859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.014252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.014260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.534 qpair failed and we were unable to recover it. 00:31:38.534 [2024-06-10 12:09:32.014436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.014670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.014678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.534 qpair failed and we were unable to recover it. 00:31:38.534 [2024-06-10 12:09:32.015036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.015385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.015392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.534 qpair failed and we were unable to recover it. 00:31:38.534 [2024-06-10 12:09:32.015627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.534 [2024-06-10 12:09:32.015970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.015979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.535 qpair failed and we were unable to recover it. 00:31:38.535 [2024-06-10 12:09:32.016361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.016594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.016602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.535 qpair failed and we were unable to recover it. 00:31:38.535 [2024-06-10 12:09:32.016963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.017195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.017202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.535 qpair failed and we were unable to recover it. 
00:31:38.535 [2024-06-10 12:09:32.017559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.017907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.017915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.535 qpair failed and we were unable to recover it. 00:31:38.535 [2024-06-10 12:09:32.018110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.018445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.018452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.535 qpair failed and we were unable to recover it. 00:31:38.535 [2024-06-10 12:09:32.018832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.019224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.019231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.535 qpair failed and we were unable to recover it. 00:31:38.535 [2024-06-10 12:09:32.019583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.019929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.019937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.535 qpair failed and we were unable to recover it. 00:31:38.535 [2024-06-10 12:09:32.020227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.020428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.020436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.535 qpair failed and we were unable to recover it. 00:31:38.535 [2024-06-10 12:09:32.020647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.020836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.020844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.535 qpair failed and we were unable to recover it. 00:31:38.535 [2024-06-10 12:09:32.021211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.021590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.021598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.535 qpair failed and we were unable to recover it. 
00:31:38.535 [2024-06-10 12:09:32.021958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.022347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.022355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.535 qpair failed and we were unable to recover it. 00:31:38.535 [2024-06-10 12:09:32.022733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.022935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.022942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.535 qpair failed and we were unable to recover it. 00:31:38.535 [2024-06-10 12:09:32.023311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.023559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.023566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.535 qpair failed and we were unable to recover it. 00:31:38.535 [2024-06-10 12:09:32.023917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.024130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.024137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.535 qpair failed and we were unable to recover it. 00:31:38.535 [2024-06-10 12:09:32.024374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.024763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.024771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.535 qpair failed and we were unable to recover it. 00:31:38.535 [2024-06-10 12:09:32.025176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.025563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.025571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.535 qpair failed and we were unable to recover it. 00:31:38.535 [2024-06-10 12:09:32.025933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.026327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.026334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.535 qpair failed and we were unable to recover it. 
00:31:38.535 [2024-06-10 12:09:32.026684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.027073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.027080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.535 qpair failed and we were unable to recover it. 00:31:38.535 [2024-06-10 12:09:32.027304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.027696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.027704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.535 qpair failed and we were unable to recover it. 00:31:38.535 [2024-06-10 12:09:32.027931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.028285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.028293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.535 qpair failed and we were unable to recover it. 00:31:38.535 [2024-06-10 12:09:32.028653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.028814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.028823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.535 qpair failed and we were unable to recover it. 00:31:38.535 [2024-06-10 12:09:32.029220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.029556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.029565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.535 qpair failed and we were unable to recover it. 00:31:38.535 [2024-06-10 12:09:32.029925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.030120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.030127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.535 qpair failed and we were unable to recover it. 00:31:38.535 [2024-06-10 12:09:32.030468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.030846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.030853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.535 qpair failed and we were unable to recover it. 
00:31:38.535 [2024-06-10 12:09:32.031195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.031554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.031560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.535 qpair failed and we were unable to recover it. 00:31:38.535 [2024-06-10 12:09:32.031942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.032297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.032304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.535 qpair failed and we were unable to recover it. 00:31:38.535 [2024-06-10 12:09:32.032666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.033101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.033108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.535 qpair failed and we were unable to recover it. 00:31:38.535 [2024-06-10 12:09:32.033450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.033676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.535 [2024-06-10 12:09:32.033683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.536 qpair failed and we were unable to recover it. 00:31:38.536 [2024-06-10 12:09:32.033985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.034362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.034369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.536 qpair failed and we were unable to recover it. 00:31:38.536 [2024-06-10 12:09:32.034723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.035068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.035076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.536 qpair failed and we were unable to recover it. 00:31:38.536 [2024-06-10 12:09:32.035429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.035702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.035709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.536 qpair failed and we were unable to recover it. 
00:31:38.536 [2024-06-10 12:09:32.036076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.036446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.036452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.536 qpair failed and we were unable to recover it. 00:31:38.536 [2024-06-10 12:09:32.036506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.036836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.036843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.536 qpair failed and we were unable to recover it. 00:31:38.536 [2024-06-10 12:09:32.037087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.037496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.037503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.536 qpair failed and we were unable to recover it. 00:31:38.536 [2024-06-10 12:09:32.037849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.038241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.038252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.536 qpair failed and we were unable to recover it. 00:31:38.536 [2024-06-10 12:09:32.038649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.039007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.039014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.536 qpair failed and we were unable to recover it. 00:31:38.536 [2024-06-10 12:09:32.039364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.039747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.039753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.536 qpair failed and we were unable to recover it. 00:31:38.536 [2024-06-10 12:09:32.040098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.040462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.040469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.536 qpair failed and we were unable to recover it. 
00:31:38.536 [2024-06-10 12:09:32.040663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.041006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.041013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.536 qpair failed and we were unable to recover it. 00:31:38.536 [2024-06-10 12:09:32.041262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.041619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.041626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.536 qpair failed and we were unable to recover it. 00:31:38.536 [2024-06-10 12:09:32.041889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.042247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.042254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.536 qpair failed and we were unable to recover it. 00:31:38.536 [2024-06-10 12:09:32.042597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.042949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.042955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.536 qpair failed and we were unable to recover it. 00:31:38.536 [2024-06-10 12:09:32.043300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.043655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.043661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.536 qpair failed and we were unable to recover it. 00:31:38.536 [2024-06-10 12:09:32.043961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.044186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.044193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.536 qpair failed and we were unable to recover it. 00:31:38.536 [2024-06-10 12:09:32.044533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.044718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.536 [2024-06-10 12:09:32.044725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.536 qpair failed and we were unable to recover it. 
00:31:38.536 [2024-06-10 12:09:32.045080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.536 [2024-06-10 12:09:32.045438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.536 [2024-06-10 12:09:32.045445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420
00:31:38.536 qpair failed and we were unable to recover it.
[... the identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock sock connection error / "qpair failed and we were unable to recover it." sequence repeats for every reconnect attempt against tqpair=0x7f1be4000b90 (addr=10.0.0.2, port=4420) between 12:09:32.045 and 12:09:32.140 ...]
00:31:38.544 [2024-06-10 12:09:32.140133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.544 [2024-06-10 12:09:32.140533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.544 [2024-06-10 12:09:32.140541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420
00:31:38.544 qpair failed and we were unable to recover it.
00:31:38.544 [2024-06-10 12:09:32.140879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.544 [2024-06-10 12:09:32.141113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.544 [2024-06-10 12:09:32.141121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.544 qpair failed and we were unable to recover it. 00:31:38.544 [2024-06-10 12:09:32.141481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.544 [2024-06-10 12:09:32.141826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.544 [2024-06-10 12:09:32.141833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.544 qpair failed and we were unable to recover it. 00:31:38.544 [2024-06-10 12:09:32.142253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.544 [2024-06-10 12:09:32.142551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.142559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.545 qpair failed and we were unable to recover it. 00:31:38.545 [2024-06-10 12:09:32.142949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.143293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.143301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.545 qpair failed and we were unable to recover it. 00:31:38.545 [2024-06-10 12:09:32.143659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.144032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.144038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.545 qpair failed and we were unable to recover it. 00:31:38.545 [2024-06-10 12:09:32.144267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.144622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.144629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.545 qpair failed and we were unable to recover it. 00:31:38.545 [2024-06-10 12:09:32.144886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.145236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.145246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.545 qpair failed and we were unable to recover it. 
00:31:38.545 [2024-06-10 12:09:32.145612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.145969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.145976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.545 qpair failed and we were unable to recover it. 00:31:38.545 [2024-06-10 12:09:32.146323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.146591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.146598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.545 qpair failed and we were unable to recover it. 00:31:38.545 [2024-06-10 12:09:32.146793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.147040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.147047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.545 qpair failed and we were unable to recover it. 00:31:38.545 [2024-06-10 12:09:32.147262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.147601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.147608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.545 qpair failed and we were unable to recover it. 00:31:38.545 [2024-06-10 12:09:32.147954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.148321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.148328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.545 qpair failed and we were unable to recover it. 00:31:38.545 [2024-06-10 12:09:32.148706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.149072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.149079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.545 qpair failed and we were unable to recover it. 00:31:38.545 [2024-06-10 12:09:32.149437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.149788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.149795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.545 qpair failed and we were unable to recover it. 
00:31:38.545 [2024-06-10 12:09:32.150144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.150479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.150487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.545 qpair failed and we were unable to recover it. 00:31:38.545 [2024-06-10 12:09:32.150736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.151075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.151082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.545 qpair failed and we were unable to recover it. 00:31:38.545 [2024-06-10 12:09:32.151525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.151723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.151731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.545 qpair failed and we were unable to recover it. 00:31:38.545 [2024-06-10 12:09:32.151967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.152374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.152381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.545 qpair failed and we were unable to recover it. 00:31:38.545 [2024-06-10 12:09:32.152561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.152894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.152901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.545 qpair failed and we were unable to recover it. 00:31:38.545 [2024-06-10 12:09:32.153256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.153599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.153606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.545 qpair failed and we were unable to recover it. 00:31:38.545 [2024-06-10 12:09:32.153818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.154041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.154048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.545 qpair failed and we were unable to recover it. 
00:31:38.545 [2024-06-10 12:09:32.154258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.154668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.154676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.545 qpair failed and we were unable to recover it. 00:31:38.545 [2024-06-10 12:09:32.154944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.155171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.155178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.545 qpair failed and we were unable to recover it. 00:31:38.545 [2024-06-10 12:09:32.155514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.155876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.155883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.545 qpair failed and we were unable to recover it. 00:31:38.545 [2024-06-10 12:09:32.156219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.156425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.545 [2024-06-10 12:09:32.156433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.545 qpair failed and we were unable to recover it. 00:31:38.545 [2024-06-10 12:09:32.156878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.157270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.157277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 00:31:38.546 [2024-06-10 12:09:32.157636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.157862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.157869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 00:31:38.546 [2024-06-10 12:09:32.158228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.158664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.158671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 
00:31:38.546 [2024-06-10 12:09:32.159027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.159273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.159280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 00:31:38.546 [2024-06-10 12:09:32.159652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.160047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.160055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 00:31:38.546 [2024-06-10 12:09:32.160419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.160782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.160788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 00:31:38.546 [2024-06-10 12:09:32.161143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.161509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.161517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 00:31:38.546 [2024-06-10 12:09:32.161873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.162232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.162238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 00:31:38.546 [2024-06-10 12:09:32.162598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.162960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.162967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 00:31:38.546 [2024-06-10 12:09:32.163343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.163682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.163688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 
00:31:38.546 [2024-06-10 12:09:32.164030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.164387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.164394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 00:31:38.546 [2024-06-10 12:09:32.164749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.164798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.164803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 00:31:38.546 [2024-06-10 12:09:32.165223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.165598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.165605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 00:31:38.546 [2024-06-10 12:09:32.165975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.166324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.166331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 00:31:38.546 [2024-06-10 12:09:32.166702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.167077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.167083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 00:31:38.546 [2024-06-10 12:09:32.167265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.167486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.167493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 00:31:38.546 [2024-06-10 12:09:32.167808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.168161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.168167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 
00:31:38.546 [2024-06-10 12:09:32.168574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.168660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.168666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 00:31:38.546 [2024-06-10 12:09:32.169001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.169352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.169359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 00:31:38.546 [2024-06-10 12:09:32.169547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.169856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.169862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 00:31:38.546 [2024-06-10 12:09:32.170196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.170554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.170561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 00:31:38.546 [2024-06-10 12:09:32.170905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.171240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.171258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 00:31:38.546 [2024-06-10 12:09:32.171594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.171893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.171899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 00:31:38.546 [2024-06-10 12:09:32.172265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.172593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.172600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 
00:31:38.546 [2024-06-10 12:09:32.172808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.173169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.173176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 00:31:38.546 [2024-06-10 12:09:32.173459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.173815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.173821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 00:31:38.546 [2024-06-10 12:09:32.174248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.174590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.174597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 00:31:38.546 [2024-06-10 12:09:32.174809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.174976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.174991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 00:31:38.546 [2024-06-10 12:09:32.175380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.175570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.546 [2024-06-10 12:09:32.175577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.546 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.175992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.176338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.176344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.176688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.177037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.177044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 
00:31:38.547 [2024-06-10 12:09:32.177228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.177616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.177622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.177987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.178327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.178333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.178766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.178959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.178966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.179336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.179694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.179700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.179971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.180180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.180186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.180461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.180808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.180814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.181126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.181176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.181183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 
00:31:38.547 [2024-06-10 12:09:32.181549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.181907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.181913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.182133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.182490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.182496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.182766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.183124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.183130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.183349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.183582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.183589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.183989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.184336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.184343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.184687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.184909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.184915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.185283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.185503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.185510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 
00:31:38.547 [2024-06-10 12:09:32.185883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.186107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.186113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.186479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.186872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.186879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.187091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.187430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.187437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.187758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.188112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.188118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.188466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.188858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.188864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.189203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.189399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.189406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.189630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.189786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.189792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 
00:31:38.547 [2024-06-10 12:09:32.190103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.190501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.190507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.190709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.191021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.191027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.191248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.191609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.191615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.191831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.192193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.192200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.192555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.192768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.192775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.193132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.193464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.193470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 12:09:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:38.547 [2024-06-10 12:09:32.193687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 12:09:32 -- common/autotest_common.sh@852 -- # return 0 00:31:38.547 [2024-06-10 12:09:32.193977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.193984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 
00:31:38.547 12:09:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:38.547 [2024-06-10 12:09:32.194344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 12:09:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:38.547 [2024-06-10 12:09:32.194696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 12:09:32 -- common/autotest_common.sh@10 -- # set +x 00:31:38.547 [2024-06-10 12:09:32.194702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.195090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.195446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.195453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.195818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.196172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.196179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.196528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.196734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.196740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.547 [2024-06-10 12:09:32.196947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.197278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.547 [2024-06-10 12:09:32.197285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.547 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.197559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.197780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.197786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.198131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.198489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.198496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 
00:31:38.548 [2024-06-10 12:09:32.198839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.199187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.199195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.199537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.199930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.199936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.200209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.200404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.200412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.200633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.201004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.201013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.201389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.201764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.201771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.202074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.202435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.202442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.202826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.203170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.203177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 
00:31:38.548 [2024-06-10 12:09:32.203435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.203758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.203765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.204105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.204414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.204420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.204785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.205009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.205016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.205279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.205637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.205644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.205984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.206358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.206365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.206627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.206868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.206876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.207126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.207379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.207387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 
00:31:38.548 [2024-06-10 12:09:32.207765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.207971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.207978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.208249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.208456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.208463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.208725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.209114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.209121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.209429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.209729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.209737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.210147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.210497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.210504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.210686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.211042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.211049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.211233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.211497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.211504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 
00:31:38.548 [2024-06-10 12:09:32.211592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.211895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.211901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.212240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.212602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.212609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.213027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.213240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.213250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.213476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.213892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.213901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.214157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.214348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.214355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.214744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.215097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.215104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.215354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.215685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.215691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 
00:31:38.548 [2024-06-10 12:09:32.216032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.216405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.216413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.216592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.217022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.217029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.217214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.217548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.217555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.218006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.218222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.218229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.218457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.218778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.218784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.219142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.219403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.219409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.548 qpair failed and we were unable to recover it. 00:31:38.548 [2024-06-10 12:09:32.219762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.548 [2024-06-10 12:09:32.220145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.220152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 
00:31:38.549 [2024-06-10 12:09:32.220408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.220779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.220786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.221003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.221379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.221385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.221724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.222071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.222078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.222438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.222607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.222621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.223031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.223096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.223101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.223445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.223794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.223800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.224057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.224404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.224411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 
00:31:38.549 [2024-06-10 12:09:32.224770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.224977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.224983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.225356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.225703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.225710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.226051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.226443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.226450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.226791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.227182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.227189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.227451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.227625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.227632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.228063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.228286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.228292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.228635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.228976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.228983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 
00:31:38.549 [2024-06-10 12:09:32.229316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.229551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.229558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.229924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.229968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.229974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.230232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.230622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.230629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.230993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.231382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.231389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.231772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.231996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.232003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.232375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.232762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.232768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 12:09:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:38.549 [2024-06-10 12:09:32.233115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 12:09:32 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:38.549 [2024-06-10 12:09:32.233465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.233474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 
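The xtrace lines interleaved with the connect errors show target_disconnect.sh rebuilding the target side: after arming the cleanup trap from nvmf/common.sh, it creates a 64 MB malloc bdev with 512-byte blocks named Malloc0. Outside the test harness the same step would look roughly like this (a sketch assuming SPDK's stock scripts/rpc.py; rpc_cmd is just the autotest wrapper around it):

    # Create a 64 MB RAM-backed bdev with 512-byte blocks, named Malloc0
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0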
00:31:38.549 12:09:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:38.549 [2024-06-10 12:09:32.233700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 12:09:32 -- common/autotest_common.sh@10 -- # set +x 00:31:38.549 [2024-06-10 12:09:32.233996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.234005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.234378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.234753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.234760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.235104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.235369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.235375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.235770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.236028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.236035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.236274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.236659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.236666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.237004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.237350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.237357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.237726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.237947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.237953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 
00:31:38.549 [2024-06-10 12:09:32.238247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.238602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.238608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.238959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.239304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.239316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.239618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.239832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.239839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.240087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.240473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.240479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.240823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.241166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.241172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.241413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.241810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.549 [2024-06-10 12:09:32.241816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.549 qpair failed and we were unable to recover it. 00:31:38.549 [2024-06-10 12:09:32.242153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.242421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.242428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.550 qpair failed and we were unable to recover it. 
00:31:38.550 [2024-06-10 12:09:32.242799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.243033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.243040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.550 qpair failed and we were unable to recover it. 00:31:38.550 [2024-06-10 12:09:32.243427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.243623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.243630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.550 qpair failed and we were unable to recover it. 00:31:38.550 [2024-06-10 12:09:32.243837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.244259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.244266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.550 qpair failed and we were unable to recover it. 00:31:38.550 [2024-06-10 12:09:32.244593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.244927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.244934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.550 qpair failed and we were unable to recover it. 00:31:38.550 [2024-06-10 12:09:32.245174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.245410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.245417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.550 qpair failed and we were unable to recover it. 00:31:38.550 [2024-06-10 12:09:32.245787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.246038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.246045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.550 qpair failed and we were unable to recover it. 00:31:38.550 [2024-06-10 12:09:32.246277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.246657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.246664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.550 qpair failed and we were unable to recover it. 
00:31:38.550 [2024-06-10 12:09:32.247056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.247381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.247388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.550 qpair failed and we were unable to recover it. 00:31:38.550 [2024-06-10 12:09:32.247726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.248121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.248128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.550 qpair failed and we were unable to recover it. 00:31:38.550 [2024-06-10 12:09:32.248546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.248696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.248702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.550 qpair failed and we were unable to recover it. 00:31:38.550 [2024-06-10 12:09:32.249063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.249429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.249436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.550 qpair failed and we were unable to recover it. 00:31:38.550 [2024-06-10 12:09:32.249662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.250027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.250035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.550 qpair failed and we were unable to recover it. 00:31:38.550 Malloc0 00:31:38.550 [2024-06-10 12:09:32.250202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.250411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.250418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.550 qpair failed and we were unable to recover it. 00:31:38.550 [2024-06-10 12:09:32.250779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 12:09:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:38.550 [2024-06-10 12:09:32.251126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.251134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.550 qpair failed and we were unable to recover it. 
00:31:38.550 12:09:32 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:38.550 [2024-06-10 12:09:32.251207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 12:09:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:38.550 [2024-06-10 12:09:32.251570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.251577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.550 qpair failed and we were unable to recover it. 00:31:38.550 12:09:32 -- common/autotest_common.sh@10 -- # set +x 00:31:38.550 [2024-06-10 12:09:32.251917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.252168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.252174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.550 qpair failed and we were unable to recover it. 00:31:38.550 [2024-06-10 12:09:32.252536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.252682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.252690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.550 qpair failed and we were unable to recover it. 00:31:38.550 [2024-06-10 12:09:32.252871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.253117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.253124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.550 qpair failed and we were unable to recover it. 00:31:38.550 [2024-06-10 12:09:32.253361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.253724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.253730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.550 qpair failed and we were unable to recover it. 00:31:38.550 [2024-06-10 12:09:32.254078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.254435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.254442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.550 qpair failed and we were unable to recover it. 00:31:38.550 [2024-06-10 12:09:32.254809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.254870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.254876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.550 qpair failed and we were unable to recover it. 
00:31:38.550 [2024-06-10 12:09:32.255119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.255484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.255491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.550 qpair failed and we were unable to recover it. 00:31:38.550 [2024-06-10 12:09:32.255870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.256101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.256107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.550 qpair failed and we were unable to recover it. 00:31:38.550 [2024-06-10 12:09:32.256468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.256625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.256632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.550 qpair failed and we were unable to recover it. 00:31:38.550 [2024-06-10 12:09:32.256878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.257217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.257223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.550 qpair failed and we were unable to recover it. 00:31:38.550 [2024-06-10 12:09:32.257404] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:38.550 [2024-06-10 12:09:32.257664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.258067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.550 [2024-06-10 12:09:32.258074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.550 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.258444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.258841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.258847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.259065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.259500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.259507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 
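The nvmf_create_transport call traced above registers the TCP transport inside the running nvmf target, and the tcp.c NOTICE "*** TCP Transport Init ***" confirms it initialized. A standalone equivalent (sketch, same scripts/rpc.py assumption as above; the extra -o flag the test passes is kept out here since the log does not show its meaning):

    # Register the TCP transport in the running nvmf target
    ./scripts/rpc.py nvmf_create_transport -t tcp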
00:31:38.551 [2024-06-10 12:09:32.259859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.260203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.260209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.260455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.260806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.260813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.261174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.261332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.261339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.261517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.261760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.261767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.262131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.262358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.262364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.262551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.262899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.262905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.263268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.263607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.263614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 
00:31:38.551 [2024-06-10 12:09:32.263853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.264221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.264227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.264598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.264823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.264830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.265192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.265397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.265404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.265740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.266120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.266126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.266267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 12:09:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:38.551 [2024-06-10 12:09:32.266625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.266631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 12:09:32 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:38.551 12:09:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:38.551 [2024-06-10 12:09:32.266978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 12:09:32 -- common/autotest_common.sh@10 -- # set +x 00:31:38.551 [2024-06-10 12:09:32.267306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.267313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 
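Next the script creates the NVMe-oF subsystem the host will later reconnect to; -a allows any host NQN to connect and -s sets the serial number (sketch, same scripts/rpc.py assumption):

    # Create subsystem cnode1, allow any host (-a), serial number SPDK00000000000001 (-s)
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001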
00:31:38.551 [2024-06-10 12:09:32.267681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.268083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.268090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.268336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.268686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.268692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.268883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.269216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.269222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.269569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.269943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.269949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.270213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.270445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.270452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.270843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.271110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.271117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.271341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.271716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.271723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 
00:31:38.551 [2024-06-10 12:09:32.272068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.272385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.272392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.272793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.272996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.273002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.273231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.273484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.273491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.273853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.274209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.274216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.274505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.274932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.274938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.275284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.275491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.275498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.275868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.276167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.276173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 
00:31:38.551 [2024-06-10 12:09:32.276398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.276791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.276798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.277010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.277221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.277229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.277447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.277789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.277796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.278213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 12:09:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:38.551 [2024-06-10 12:09:32.278583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.278590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.278774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 12:09:32 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:38.551 [2024-06-10 12:09:32.279026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.279033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 12:09:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:38.551 [2024-06-10 12:09:32.279382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 12:09:32 -- common/autotest_common.sh@10 -- # set +x 00:31:38.551 [2024-06-10 12:09:32.279604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.279611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.279892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.280021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.280027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 
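The Malloc0 bdev created earlier is then attached to that subsystem as a namespace (sketch, same scripts/rpc.py assumption):

    # Expose Malloc0 as a namespace of cnode1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0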
00:31:38.551 [2024-06-10 12:09:32.280403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.280789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.280796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.281142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.281338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.281345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.281680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.281907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.551 [2024-06-10 12:09:32.281913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.551 qpair failed and we were unable to recover it. 00:31:38.551 [2024-06-10 12:09:32.282278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.552 [2024-06-10 12:09:32.282646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.552 [2024-06-10 12:09:32.282653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.552 qpair failed and we were unable to recover it. 00:31:38.552 [2024-06-10 12:09:32.282991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.552 [2024-06-10 12:09:32.283206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.552 [2024-06-10 12:09:32.283213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.552 qpair failed and we were unable to recover it. 00:31:38.552 [2024-06-10 12:09:32.283424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.552 [2024-06-10 12:09:32.283784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.552 [2024-06-10 12:09:32.283790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.552 qpair failed and we were unable to recover it. 00:31:38.552 [2024-06-10 12:09:32.284141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.552 [2024-06-10 12:09:32.284507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.552 [2024-06-10 12:09:32.284514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.552 qpair failed and we were unable to recover it. 
00:31:38.552 [2024-06-10 12:09:32.284855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.552 [2024-06-10 12:09:32.285077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.552 [2024-06-10 12:09:32.285085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.552 qpair failed and we were unable to recover it. 00:31:38.552 [2024-06-10 12:09:32.285326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.552 [2024-06-10 12:09:32.285681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.552 [2024-06-10 12:09:32.285688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.552 qpair failed and we were unable to recover it. 00:31:38.552 [2024-06-10 12:09:32.286047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.552 [2024-06-10 12:09:32.286282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.552 [2024-06-10 12:09:32.286288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.552 qpair failed and we were unable to recover it. 00:31:38.552 [2024-06-10 12:09:32.286689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.552 [2024-06-10 12:09:32.286899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.552 [2024-06-10 12:09:32.286905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.552 qpair failed and we were unable to recover it. 00:31:38.552 [2024-06-10 12:09:32.287281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.552 [2024-06-10 12:09:32.287656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.552 [2024-06-10 12:09:32.287663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.552 qpair failed and we were unable to recover it. 00:31:38.552 [2024-06-10 12:09:32.288001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.552 [2024-06-10 12:09:32.288328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.552 [2024-06-10 12:09:32.288335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.552 qpair failed and we were unable to recover it. 00:31:38.552 [2024-06-10 12:09:32.288715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.552 [2024-06-10 12:09:32.289077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.552 [2024-06-10 12:09:32.289085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.552 qpair failed and we were unable to recover it. 
00:31:38.552 [2024-06-10 12:09:32.289359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.552 [2024-06-10 12:09:32.289713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.552 [2024-06-10 12:09:32.289719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.552 qpair failed and we were unable to recover it. 00:31:38.552 [2024-06-10 12:09:32.290077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.814 [2024-06-10 12:09:32.290339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.814 [2024-06-10 12:09:32.290347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.814 qpair failed and we were unable to recover it. 00:31:38.814 12:09:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:38.814 [2024-06-10 12:09:32.290712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.814 12:09:32 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:38.814 [2024-06-10 12:09:32.291081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.814 [2024-06-10 12:09:32.291087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.814 qpair failed and we were unable to recover it. 00:31:38.814 12:09:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:38.814 [2024-06-10 12:09:32.291313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.814 12:09:32 -- common/autotest_common.sh@10 -- # set +x 00:31:38.814 [2024-06-10 12:09:32.291711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.814 [2024-06-10 12:09:32.291717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.814 qpair failed and we were unable to recover it. 00:31:38.814 [2024-06-10 12:09:32.292066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.814 [2024-06-10 12:09:32.292275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.814 [2024-06-10 12:09:32.292281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.814 qpair failed and we were unable to recover it. 00:31:38.814 [2024-06-10 12:09:32.292638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.814 [2024-06-10 12:09:32.292983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.814 [2024-06-10 12:09:32.292990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.814 qpair failed and we were unable to recover it. 
00:31:38.814 [2024-06-10 12:09:32.293353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.814 [2024-06-10 12:09:32.293741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.814 [2024-06-10 12:09:32.293748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.814 qpair failed and we were unable to recover it. 00:31:38.814 [2024-06-10 12:09:32.293971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.814 [2024-06-10 12:09:32.294318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.814 [2024-06-10 12:09:32.294324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.814 qpair failed and we were unable to recover it. 00:31:38.814 [2024-06-10 12:09:32.294646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.814 [2024-06-10 12:09:32.294990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.814 [2024-06-10 12:09:32.294996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.814 qpair failed and we were unable to recover it. 00:31:38.814 [2024-06-10 12:09:32.295342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.814 [2024-06-10 12:09:32.295702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.814 [2024-06-10 12:09:32.295709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.814 qpair failed and we were unable to recover it. 00:31:38.814 [2024-06-10 12:09:32.296048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.814 [2024-06-10 12:09:32.296384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.814 [2024-06-10 12:09:32.296391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.814 qpair failed and we were unable to recover it. 00:31:38.814 [2024-06-10 12:09:32.296761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.814 [2024-06-10 12:09:32.297048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.814 [2024-06-10 12:09:32.297054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.814 qpair failed and we were unable to recover it. 00:31:38.814 [2024-06-10 12:09:32.297273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.814 [2024-06-10 12:09:32.297496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.814 [2024-06-10 12:09:32.297502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1be4000b90 with addr=10.0.0.2, port=4420 00:31:38.814 qpair failed and we were unable to recover it. 
00:31:38.814 [2024-06-10 12:09:32.297676] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:38.814 [2024-06-10 12:09:32.297760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.814 [2024-06-10 12:09:32.299853] posix.c: 670:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:31:38.814 [2024-06-10 12:09:32.299887] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f1be4000b90 (107): Transport endpoint is not connected 00:31:38.814 [2024-06-10 12:09:32.299922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.814 qpair failed and we were unable to recover it. 00:31:38.814 12:09:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:38.814 12:09:32 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:38.814 12:09:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:38.814 12:09:32 -- common/autotest_common.sh@10 -- # set +x 00:31:38.814 [2024-06-10 12:09:32.308326] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.814 [2024-06-10 12:09:32.308401] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.814 [2024-06-10 12:09:32.308415] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.814 [2024-06-10 12:09:32.308421] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.814 [2024-06-10 12:09:32.308425] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:38.814 [2024-06-10 12:09:32.308438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.814 qpair failed and we were unable to recover it. 00:31:38.814 12:09:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:38.814 12:09:32 -- host/target_disconnect.sh@58 -- # wait 2156759 00:31:38.814 [2024-06-10 12:09:32.318199] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.814 [2024-06-10 12:09:32.318265] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.814 [2024-06-10 12:09:32.318278] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.814 [2024-06-10 12:09:32.318283] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.814 [2024-06-10 12:09:32.318287] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:38.814 [2024-06-10 12:09:32.318298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.814 qpair failed and we were unable to recover it. 
00:31:38.814 [2024-06-10 12:09:32.328179] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.814 [2024-06-10 12:09:32.328241] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.814 [2024-06-10 12:09:32.328257] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.814 [2024-06-10 12:09:32.328262] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.814 [2024-06-10 12:09:32.328266] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:38.814 [2024-06-10 12:09:32.328276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.814 qpair failed and we were unable to recover it. 00:31:38.814 [2024-06-10 12:09:32.338203] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.814 [2024-06-10 12:09:32.338279] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.814 [2024-06-10 12:09:32.338291] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.815 [2024-06-10 12:09:32.338296] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.815 [2024-06-10 12:09:32.338300] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:38.815 [2024-06-10 12:09:32.338310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.815 qpair failed and we were unable to recover it. 00:31:38.815 [2024-06-10 12:09:32.348223] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.815 [2024-06-10 12:09:32.348282] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.815 [2024-06-10 12:09:32.348294] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.815 [2024-06-10 12:09:32.348299] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.815 [2024-06-10 12:09:32.348303] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:38.815 [2024-06-10 12:09:32.348313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.815 qpair failed and we were unable to recover it. 
00:31:38.815 [2024-06-10 12:09:32.358254] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.815 [2024-06-10 12:09:32.358310] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.815 [2024-06-10 12:09:32.358322] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.815 [2024-06-10 12:09:32.358327] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.815 [2024-06-10 12:09:32.358331] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:38.815 [2024-06-10 12:09:32.358341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.815 qpair failed and we were unable to recover it. 00:31:38.815 [2024-06-10 12:09:32.368142] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.815 [2024-06-10 12:09:32.368237] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.815 [2024-06-10 12:09:32.368253] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.815 [2024-06-10 12:09:32.368258] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.815 [2024-06-10 12:09:32.368262] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:38.815 [2024-06-10 12:09:32.368273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.815 qpair failed and we were unable to recover it. 00:31:38.815 [2024-06-10 12:09:32.378320] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.815 [2024-06-10 12:09:32.378386] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.815 [2024-06-10 12:09:32.378398] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.815 [2024-06-10 12:09:32.378403] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.815 [2024-06-10 12:09:32.378410] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:38.815 [2024-06-10 12:09:32.378421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.815 qpair failed and we were unable to recover it. 
00:31:38.815 [2024-06-10 12:09:32.388331] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.815 [2024-06-10 12:09:32.388391] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.815 [2024-06-10 12:09:32.388403] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.815 [2024-06-10 12:09:32.388408] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.815 [2024-06-10 12:09:32.388412] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:38.815 [2024-06-10 12:09:32.388423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.815 qpair failed and we were unable to recover it. 00:31:38.815 [2024-06-10 12:09:32.398348] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.815 [2024-06-10 12:09:32.398410] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.815 [2024-06-10 12:09:32.398421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.815 [2024-06-10 12:09:32.398426] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.815 [2024-06-10 12:09:32.398430] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:38.815 [2024-06-10 12:09:32.398441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.815 qpair failed and we were unable to recover it. 00:31:38.815 [2024-06-10 12:09:32.408370] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.815 [2024-06-10 12:09:32.408430] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.815 [2024-06-10 12:09:32.408442] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.815 [2024-06-10 12:09:32.408447] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.815 [2024-06-10 12:09:32.408451] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:38.815 [2024-06-10 12:09:32.408461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.815 qpair failed and we were unable to recover it. 
00:31:38.815 [2024-06-10 12:09:32.418410] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.815 [2024-06-10 12:09:32.418479] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.815 [2024-06-10 12:09:32.418490] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.815 [2024-06-10 12:09:32.418495] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.815 [2024-06-10 12:09:32.418499] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:38.815 [2024-06-10 12:09:32.418509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.815 qpair failed and we were unable to recover it. 00:31:38.815 [2024-06-10 12:09:32.428328] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.815 [2024-06-10 12:09:32.428390] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.815 [2024-06-10 12:09:32.428402] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.815 [2024-06-10 12:09:32.428407] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.815 [2024-06-10 12:09:32.428411] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:38.815 [2024-06-10 12:09:32.428422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.815 qpair failed and we were unable to recover it. 00:31:38.815 [2024-06-10 12:09:32.438480] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.815 [2024-06-10 12:09:32.438540] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.815 [2024-06-10 12:09:32.438551] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.815 [2024-06-10 12:09:32.438556] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.815 [2024-06-10 12:09:32.438560] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:38.815 [2024-06-10 12:09:32.438571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.815 qpair failed and we were unable to recover it. 
00:31:38.815 [2024-06-10 12:09:32.448502] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.815 [2024-06-10 12:09:32.448573] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.815 [2024-06-10 12:09:32.448585] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.815 [2024-06-10 12:09:32.448589] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.815 [2024-06-10 12:09:32.448594] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:38.815 [2024-06-10 12:09:32.448603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.815 qpair failed and we were unable to recover it. 00:31:38.815 [2024-06-10 12:09:32.458534] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.815 [2024-06-10 12:09:32.458598] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.815 [2024-06-10 12:09:32.458610] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.815 [2024-06-10 12:09:32.458614] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.815 [2024-06-10 12:09:32.458619] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:38.815 [2024-06-10 12:09:32.458629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.815 qpair failed and we were unable to recover it. 00:31:38.815 [2024-06-10 12:09:32.468552] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.815 [2024-06-10 12:09:32.468612] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.815 [2024-06-10 12:09:32.468624] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.815 [2024-06-10 12:09:32.468632] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.815 [2024-06-10 12:09:32.468636] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:38.815 [2024-06-10 12:09:32.468646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.815 qpair failed and we were unable to recover it. 
00:31:38.815 [2024-06-10 12:09:32.478462] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.816 [2024-06-10 12:09:32.478528] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.816 [2024-06-10 12:09:32.478539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.816 [2024-06-10 12:09:32.478544] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.816 [2024-06-10 12:09:32.478548] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:38.816 [2024-06-10 12:09:32.478559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.816 qpair failed and we were unable to recover it. 00:31:38.816 [2024-06-10 12:09:32.488615] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.816 [2024-06-10 12:09:32.488708] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.816 [2024-06-10 12:09:32.488720] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.816 [2024-06-10 12:09:32.488724] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.816 [2024-06-10 12:09:32.488728] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:38.816 [2024-06-10 12:09:32.488738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.816 qpair failed and we were unable to recover it. 00:31:38.816 [2024-06-10 12:09:32.498668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.816 [2024-06-10 12:09:32.498732] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.816 [2024-06-10 12:09:32.498744] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.816 [2024-06-10 12:09:32.498748] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.816 [2024-06-10 12:09:32.498752] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:38.816 [2024-06-10 12:09:32.498762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.816 qpair failed and we were unable to recover it. 
00:31:38.816 [2024-06-10 12:09:32.508561] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.816 [2024-06-10 12:09:32.508628] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.816 [2024-06-10 12:09:32.508639] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.816 [2024-06-10 12:09:32.508644] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.816 [2024-06-10 12:09:32.508648] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:38.816 [2024-06-10 12:09:32.508658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.816 qpair failed and we were unable to recover it. 00:31:38.816 [2024-06-10 12:09:32.518702] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.816 [2024-06-10 12:09:32.518765] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.816 [2024-06-10 12:09:32.518777] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.816 [2024-06-10 12:09:32.518781] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.816 [2024-06-10 12:09:32.518786] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:38.816 [2024-06-10 12:09:32.518796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.816 qpair failed and we were unable to recover it. 00:31:38.816 [2024-06-10 12:09:32.528612] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.816 [2024-06-10 12:09:32.528670] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.816 [2024-06-10 12:09:32.528682] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.816 [2024-06-10 12:09:32.528686] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.816 [2024-06-10 12:09:32.528691] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:38.816 [2024-06-10 12:09:32.528701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.816 qpair failed and we were unable to recover it. 
00:31:38.816 [2024-06-10 12:09:32.538769] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.816 [2024-06-10 12:09:32.538830] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.816 [2024-06-10 12:09:32.538841] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.816 [2024-06-10 12:09:32.538846] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.816 [2024-06-10 12:09:32.538850] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:38.816 [2024-06-10 12:09:32.538860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.816 qpair failed and we were unable to recover it. 00:31:38.816 [2024-06-10 12:09:32.548799] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.816 [2024-06-10 12:09:32.548856] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.816 [2024-06-10 12:09:32.548868] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.816 [2024-06-10 12:09:32.548873] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.816 [2024-06-10 12:09:32.548877] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:38.816 [2024-06-10 12:09:32.548887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.816 qpair failed and we were unable to recover it. 00:31:38.816 [2024-06-10 12:09:32.558869] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.816 [2024-06-10 12:09:32.558944] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.816 [2024-06-10 12:09:32.558957] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.816 [2024-06-10 12:09:32.558967] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.816 [2024-06-10 12:09:32.558972] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:38.816 [2024-06-10 12:09:32.558982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.816 qpair failed and we were unable to recover it. 
00:31:38.816 [2024-06-10 12:09:32.568837] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.816 [2024-06-10 12:09:32.568895] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.816 [2024-06-10 12:09:32.568907] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.816 [2024-06-10 12:09:32.568912] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.816 [2024-06-10 12:09:32.568916] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:38.816 [2024-06-10 12:09:32.568926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.816 qpair failed and we were unable to recover it. 00:31:38.816 [2024-06-10 12:09:32.578853] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.816 [2024-06-10 12:09:32.578917] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.816 [2024-06-10 12:09:32.578929] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.816 [2024-06-10 12:09:32.578934] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.816 [2024-06-10 12:09:32.578938] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:38.816 [2024-06-10 12:09:32.578948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:38.816 qpair failed and we were unable to recover it. 00:31:39.078 [2024-06-10 12:09:32.588999] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.078 [2024-06-10 12:09:32.589065] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.078 [2024-06-10 12:09:32.589077] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.078 [2024-06-10 12:09:32.589082] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.078 [2024-06-10 12:09:32.589086] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.078 [2024-06-10 12:09:32.589096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.078 qpair failed and we were unable to recover it. 
00:31:39.078 [2024-06-10 12:09:32.598865] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.078 [2024-06-10 12:09:32.598926] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.078 [2024-06-10 12:09:32.598937] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.078 [2024-06-10 12:09:32.598942] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.078 [2024-06-10 12:09:32.598947] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.078 [2024-06-10 12:09:32.598956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.078 qpair failed and we were unable to recover it. 00:31:39.078 [2024-06-10 12:09:32.609003] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.078 [2024-06-10 12:09:32.609062] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.078 [2024-06-10 12:09:32.609074] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.078 [2024-06-10 12:09:32.609079] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.078 [2024-06-10 12:09:32.609083] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.078 [2024-06-10 12:09:32.609093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.078 qpair failed and we were unable to recover it. 00:31:39.078 [2024-06-10 12:09:32.619050] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.078 [2024-06-10 12:09:32.619115] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.078 [2024-06-10 12:09:32.619127] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.078 [2024-06-10 12:09:32.619131] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.078 [2024-06-10 12:09:32.619136] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.078 [2024-06-10 12:09:32.619146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.078 qpair failed and we were unable to recover it. 
00:31:39.078 [2024-06-10 12:09:32.629012] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.078 [2024-06-10 12:09:32.629066] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.078 [2024-06-10 12:09:32.629078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.078 [2024-06-10 12:09:32.629082] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.078 [2024-06-10 12:09:32.629087] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.078 [2024-06-10 12:09:32.629096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.078 qpair failed and we were unable to recover it. 00:31:39.078 [2024-06-10 12:09:32.639045] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.078 [2024-06-10 12:09:32.639103] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.079 [2024-06-10 12:09:32.639117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.079 [2024-06-10 12:09:32.639122] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.079 [2024-06-10 12:09:32.639126] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.079 [2024-06-10 12:09:32.639138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.079 qpair failed and we were unable to recover it. 00:31:39.079 [2024-06-10 12:09:32.649088] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.079 [2024-06-10 12:09:32.649237] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.079 [2024-06-10 12:09:32.649258] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.079 [2024-06-10 12:09:32.649263] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.079 [2024-06-10 12:09:32.649268] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.079 [2024-06-10 12:09:32.649279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.079 qpair failed and we were unable to recover it. 
00:31:39.079 [2024-06-10 12:09:32.658995] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.079 [2024-06-10 12:09:32.659133] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.079 [2024-06-10 12:09:32.659145] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.079 [2024-06-10 12:09:32.659149] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.079 [2024-06-10 12:09:32.659154] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.079 [2024-06-10 12:09:32.659164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.079 qpair failed and we were unable to recover it. 00:31:39.079 [2024-06-10 12:09:32.669111] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.079 [2024-06-10 12:09:32.669173] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.079 [2024-06-10 12:09:32.669186] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.079 [2024-06-10 12:09:32.669190] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.079 [2024-06-10 12:09:32.669194] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.079 [2024-06-10 12:09:32.669204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.079 qpair failed and we were unable to recover it. 00:31:39.079 [2024-06-10 12:09:32.679143] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.079 [2024-06-10 12:09:32.679200] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.079 [2024-06-10 12:09:32.679212] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.079 [2024-06-10 12:09:32.679217] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.079 [2024-06-10 12:09:32.679221] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.079 [2024-06-10 12:09:32.679231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.079 qpair failed and we were unable to recover it. 
00:31:39.079 [2024-06-10 12:09:32.689171] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.079 [2024-06-10 12:09:32.689234] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.079 [2024-06-10 12:09:32.689250] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.079 [2024-06-10 12:09:32.689255] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.079 [2024-06-10 12:09:32.689260] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.079 [2024-06-10 12:09:32.689274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.079 qpair failed and we were unable to recover it. 00:31:39.079 [2024-06-10 12:09:32.699190] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.079 [2024-06-10 12:09:32.699256] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.079 [2024-06-10 12:09:32.699268] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.079 [2024-06-10 12:09:32.699273] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.079 [2024-06-10 12:09:32.699278] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.079 [2024-06-10 12:09:32.699288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.079 qpair failed and we were unable to recover it. 00:31:39.079 [2024-06-10 12:09:32.709215] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.079 [2024-06-10 12:09:32.709278] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.079 [2024-06-10 12:09:32.709290] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.079 [2024-06-10 12:09:32.709295] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.079 [2024-06-10 12:09:32.709300] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.079 [2024-06-10 12:09:32.709310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.079 qpair failed and we were unable to recover it. 
00:31:39.079 [2024-06-10 12:09:32.719274] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.079 [2024-06-10 12:09:32.719360] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.079 [2024-06-10 12:09:32.719372] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.079 [2024-06-10 12:09:32.719377] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.079 [2024-06-10 12:09:32.719381] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.079 [2024-06-10 12:09:32.719392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.079 qpair failed and we were unable to recover it. 00:31:39.079 [2024-06-10 12:09:32.729276] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.080 [2024-06-10 12:09:32.729332] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.080 [2024-06-10 12:09:32.729344] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.080 [2024-06-10 12:09:32.729349] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.080 [2024-06-10 12:09:32.729353] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.080 [2024-06-10 12:09:32.729363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.080 qpair failed and we were unable to recover it. 00:31:39.080 [2024-06-10 12:09:32.739284] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.080 [2024-06-10 12:09:32.739347] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.080 [2024-06-10 12:09:32.739362] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.080 [2024-06-10 12:09:32.739367] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.080 [2024-06-10 12:09:32.739371] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.080 [2024-06-10 12:09:32.739381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.080 qpair failed and we were unable to recover it. 
00:31:39.080 [2024-06-10 12:09:32.749234] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.080 [2024-06-10 12:09:32.749293] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.080 [2024-06-10 12:09:32.749306] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.080 [2024-06-10 12:09:32.749310] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.080 [2024-06-10 12:09:32.749314] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.080 [2024-06-10 12:09:32.749325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.080 qpair failed and we were unable to recover it. 00:31:39.080 [2024-06-10 12:09:32.759362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.080 [2024-06-10 12:09:32.759425] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.080 [2024-06-10 12:09:32.759437] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.080 [2024-06-10 12:09:32.759442] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.080 [2024-06-10 12:09:32.759446] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.080 [2024-06-10 12:09:32.759457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.080 qpair failed and we were unable to recover it. 00:31:39.080 [2024-06-10 12:09:32.769385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.080 [2024-06-10 12:09:32.769446] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.080 [2024-06-10 12:09:32.769458] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.080 [2024-06-10 12:09:32.769462] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.080 [2024-06-10 12:09:32.769467] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.080 [2024-06-10 12:09:32.769477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.080 qpair failed and we were unable to recover it. 
00:31:39.080 [2024-06-10 12:09:32.779454] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.080 [2024-06-10 12:09:32.779522] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.080 [2024-06-10 12:09:32.779533] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.080 [2024-06-10 12:09:32.779538] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.080 [2024-06-10 12:09:32.779542] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.080 [2024-06-10 12:09:32.779555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.080 qpair failed and we were unable to recover it. 00:31:39.080 [2024-06-10 12:09:32.789447] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.080 [2024-06-10 12:09:32.789508] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.080 [2024-06-10 12:09:32.789519] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.080 [2024-06-10 12:09:32.789524] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.080 [2024-06-10 12:09:32.789528] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.080 [2024-06-10 12:09:32.789538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.080 qpair failed and we were unable to recover it. 00:31:39.080 [2024-06-10 12:09:32.799357] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.080 [2024-06-10 12:09:32.799419] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.080 [2024-06-10 12:09:32.799431] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.080 [2024-06-10 12:09:32.799436] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.080 [2024-06-10 12:09:32.799440] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.080 [2024-06-10 12:09:32.799451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.080 qpair failed and we were unable to recover it. 
00:31:39.080 [2024-06-10 12:09:32.809414] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.080 [2024-06-10 12:09:32.809472] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.080 [2024-06-10 12:09:32.809484] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.080 [2024-06-10 12:09:32.809489] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.080 [2024-06-10 12:09:32.809493] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.081 [2024-06-10 12:09:32.809503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.081 qpair failed and we were unable to recover it. 00:31:39.081 [2024-06-10 12:09:32.819529] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.081 [2024-06-10 12:09:32.819596] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.081 [2024-06-10 12:09:32.819608] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.081 [2024-06-10 12:09:32.819613] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.081 [2024-06-10 12:09:32.819617] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.081 [2024-06-10 12:09:32.819627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.081 qpair failed and we were unable to recover it. 00:31:39.081 [2024-06-10 12:09:32.829542] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.081 [2024-06-10 12:09:32.829604] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.081 [2024-06-10 12:09:32.829619] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.081 [2024-06-10 12:09:32.829624] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.081 [2024-06-10 12:09:32.829628] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.081 [2024-06-10 12:09:32.829639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.081 qpair failed and we were unable to recover it. 
00:31:39.081 [2024-06-10 12:09:32.839604] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.081 [2024-06-10 12:09:32.839690] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.081 [2024-06-10 12:09:32.839701] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.081 [2024-06-10 12:09:32.839706] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.081 [2024-06-10 12:09:32.839711] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.081 [2024-06-10 12:09:32.839721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.081 qpair failed and we were unable to recover it. 00:31:39.345 [2024-06-10 12:09:32.849668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.345 [2024-06-10 12:09:32.849727] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.345 [2024-06-10 12:09:32.849739] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.345 [2024-06-10 12:09:32.849744] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.345 [2024-06-10 12:09:32.849748] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.345 [2024-06-10 12:09:32.849758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.345 qpair failed and we were unable to recover it. 00:31:39.345 [2024-06-10 12:09:32.859535] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.345 [2024-06-10 12:09:32.859595] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.346 [2024-06-10 12:09:32.859608] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.346 [2024-06-10 12:09:32.859612] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.346 [2024-06-10 12:09:32.859617] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.346 [2024-06-10 12:09:32.859627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.346 qpair failed and we were unable to recover it. 
00:31:39.346 [2024-06-10 12:09:32.869691] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.346 [2024-06-10 12:09:32.869750] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.346 [2024-06-10 12:09:32.869762] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.346 [2024-06-10 12:09:32.869767] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.346 [2024-06-10 12:09:32.869774] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.346 [2024-06-10 12:09:32.869784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.346 qpair failed and we were unable to recover it. 00:31:39.346 [2024-06-10 12:09:32.879700] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.346 [2024-06-10 12:09:32.879761] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.346 [2024-06-10 12:09:32.879774] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.346 [2024-06-10 12:09:32.879778] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.346 [2024-06-10 12:09:32.879783] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.346 [2024-06-10 12:09:32.879793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.346 qpair failed and we were unable to recover it. 00:31:39.346 [2024-06-10 12:09:32.889749] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.346 [2024-06-10 12:09:32.889809] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.346 [2024-06-10 12:09:32.889821] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.346 [2024-06-10 12:09:32.889825] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.346 [2024-06-10 12:09:32.889829] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.346 [2024-06-10 12:09:32.889839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.346 qpair failed and we were unable to recover it. 
00:31:39.346 [2024-06-10 12:09:32.899774] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.346 [2024-06-10 12:09:32.899837] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.346 [2024-06-10 12:09:32.899848] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.346 [2024-06-10 12:09:32.899853] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.346 [2024-06-10 12:09:32.899857] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.346 [2024-06-10 12:09:32.899867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.346 qpair failed and we were unable to recover it. 00:31:39.346 [2024-06-10 12:09:32.909807] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.346 [2024-06-10 12:09:32.909866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.346 [2024-06-10 12:09:32.909877] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.346 [2024-06-10 12:09:32.909882] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.346 [2024-06-10 12:09:32.909886] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.346 [2024-06-10 12:09:32.909896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.346 qpair failed and we were unable to recover it. 00:31:39.346 [2024-06-10 12:09:32.919796] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.346 [2024-06-10 12:09:32.919896] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.346 [2024-06-10 12:09:32.919908] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.346 [2024-06-10 12:09:32.919913] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.346 [2024-06-10 12:09:32.919917] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.346 [2024-06-10 12:09:32.919927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.346 qpair failed and we were unable to recover it. 
00:31:39.346 [2024-06-10 12:09:32.929928] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.346 [2024-06-10 12:09:32.930037] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.346 [2024-06-10 12:09:32.930055] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.346 [2024-06-10 12:09:32.930061] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.346 [2024-06-10 12:09:32.930066] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.346 [2024-06-10 12:09:32.930079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.346 qpair failed and we were unable to recover it. 00:31:39.346 [2024-06-10 12:09:32.939782] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.346 [2024-06-10 12:09:32.939852] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.346 [2024-06-10 12:09:32.939871] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.346 [2024-06-10 12:09:32.939877] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.346 [2024-06-10 12:09:32.939881] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.346 [2024-06-10 12:09:32.939895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.346 qpair failed and we were unable to recover it. 00:31:39.346 [2024-06-10 12:09:32.949905] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.346 [2024-06-10 12:09:32.949970] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.346 [2024-06-10 12:09:32.949989] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.346 [2024-06-10 12:09:32.949994] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.346 [2024-06-10 12:09:32.949999] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.346 [2024-06-10 12:09:32.950012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.346 qpair failed and we were unable to recover it. 
00:31:39.346 [2024-06-10 12:09:32.959823] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.346 [2024-06-10 12:09:32.959885] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.346 [2024-06-10 12:09:32.959898] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.346 [2024-06-10 12:09:32.959902] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.346 [2024-06-10 12:09:32.959910] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.346 [2024-06-10 12:09:32.959921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.346 qpair failed and we were unable to recover it. 00:31:39.346 [2024-06-10 12:09:32.969979] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.346 [2024-06-10 12:09:32.970040] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.346 [2024-06-10 12:09:32.970052] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.346 [2024-06-10 12:09:32.970057] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.346 [2024-06-10 12:09:32.970061] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.346 [2024-06-10 12:09:32.970071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.346 qpair failed and we were unable to recover it. 00:31:39.346 [2024-06-10 12:09:32.980002] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.346 [2024-06-10 12:09:32.980076] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.346 [2024-06-10 12:09:32.980095] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.346 [2024-06-10 12:09:32.980100] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.346 [2024-06-10 12:09:32.980106] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.346 [2024-06-10 12:09:32.980119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.346 qpair failed and we were unable to recover it. 
00:31:39.346 [2024-06-10 12:09:32.990051] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.346 [2024-06-10 12:09:32.990139] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.346 [2024-06-10 12:09:32.990152] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.347 [2024-06-10 12:09:32.990157] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.347 [2024-06-10 12:09:32.990161] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.347 [2024-06-10 12:09:32.990172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.347 qpair failed and we were unable to recover it. 00:31:39.347 [2024-06-10 12:09:33.000060] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.347 [2024-06-10 12:09:33.000115] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.347 [2024-06-10 12:09:33.000127] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.347 [2024-06-10 12:09:33.000131] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.347 [2024-06-10 12:09:33.000136] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.347 [2024-06-10 12:09:33.000146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.347 qpair failed and we were unable to recover it. 00:31:39.347 [2024-06-10 12:09:33.010114] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.347 [2024-06-10 12:09:33.010177] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.347 [2024-06-10 12:09:33.010190] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.347 [2024-06-10 12:09:33.010194] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.347 [2024-06-10 12:09:33.010199] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.347 [2024-06-10 12:09:33.010209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.347 qpair failed and we were unable to recover it. 
00:31:39.347 [2024-06-10 12:09:33.020150] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.347 [2024-06-10 12:09:33.020251] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.347 [2024-06-10 12:09:33.020264] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.347 [2024-06-10 12:09:33.020269] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.347 [2024-06-10 12:09:33.020273] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.347 [2024-06-10 12:09:33.020284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.347 qpair failed and we were unable to recover it. 00:31:39.347 [2024-06-10 12:09:33.030010] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.347 [2024-06-10 12:09:33.030066] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.347 [2024-06-10 12:09:33.030078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.347 [2024-06-10 12:09:33.030083] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.347 [2024-06-10 12:09:33.030087] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.347 [2024-06-10 12:09:33.030097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.347 qpair failed and we were unable to recover it. 00:31:39.347 [2024-06-10 12:09:33.040187] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.347 [2024-06-10 12:09:33.040247] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.347 [2024-06-10 12:09:33.040259] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.347 [2024-06-10 12:09:33.040263] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.347 [2024-06-10 12:09:33.040267] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.347 [2024-06-10 12:09:33.040277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.347 qpair failed and we were unable to recover it. 
00:31:39.347 [2024-06-10 12:09:33.050202] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.347 [2024-06-10 12:09:33.050280] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.347 [2024-06-10 12:09:33.050291] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.347 [2024-06-10 12:09:33.050302] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.347 [2024-06-10 12:09:33.050306] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.347 [2024-06-10 12:09:33.050316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.347 qpair failed and we were unable to recover it. 00:31:39.347 [2024-06-10 12:09:33.060115] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.347 [2024-06-10 12:09:33.060180] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.347 [2024-06-10 12:09:33.060191] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.347 [2024-06-10 12:09:33.060196] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.347 [2024-06-10 12:09:33.060200] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.347 [2024-06-10 12:09:33.060210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.347 qpair failed and we were unable to recover it. 00:31:39.347 [2024-06-10 12:09:33.070268] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.347 [2024-06-10 12:09:33.070328] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.347 [2024-06-10 12:09:33.070339] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.347 [2024-06-10 12:09:33.070344] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.347 [2024-06-10 12:09:33.070348] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.347 [2024-06-10 12:09:33.070358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.347 qpair failed and we were unable to recover it. 
00:31:39.347 [2024-06-10 12:09:33.080288] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.347 [2024-06-10 12:09:33.080347] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.347 [2024-06-10 12:09:33.080359] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.347 [2024-06-10 12:09:33.080364] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.347 [2024-06-10 12:09:33.080368] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.347 [2024-06-10 12:09:33.080378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.347 qpair failed and we were unable to recover it. 00:31:39.347 [2024-06-10 12:09:33.090314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.347 [2024-06-10 12:09:33.090424] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.347 [2024-06-10 12:09:33.090437] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.347 [2024-06-10 12:09:33.090445] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.347 [2024-06-10 12:09:33.090449] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.347 [2024-06-10 12:09:33.090460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.347 qpair failed and we were unable to recover it. 00:31:39.347 [2024-06-10 12:09:33.100353] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.347 [2024-06-10 12:09:33.100416] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.347 [2024-06-10 12:09:33.100428] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.347 [2024-06-10 12:09:33.100432] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.347 [2024-06-10 12:09:33.100436] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.347 [2024-06-10 12:09:33.100447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.347 qpair failed and we were unable to recover it. 
00:31:39.347 [2024-06-10 12:09:33.110392] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.347 [2024-06-10 12:09:33.110448] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.347 [2024-06-10 12:09:33.110460] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.347 [2024-06-10 12:09:33.110465] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.347 [2024-06-10 12:09:33.110469] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.347 [2024-06-10 12:09:33.110479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.347 qpair failed and we were unable to recover it. 00:31:39.609 [2024-06-10 12:09:33.120431] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.609 [2024-06-10 12:09:33.120489] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.609 [2024-06-10 12:09:33.120501] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.609 [2024-06-10 12:09:33.120506] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.609 [2024-06-10 12:09:33.120510] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.609 [2024-06-10 12:09:33.120521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.609 qpair failed and we were unable to recover it. 00:31:39.609 [2024-06-10 12:09:33.130450] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.609 [2024-06-10 12:09:33.130538] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.609 [2024-06-10 12:09:33.130549] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.610 [2024-06-10 12:09:33.130554] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.610 [2024-06-10 12:09:33.130558] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.610 [2024-06-10 12:09:33.130569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.610 qpair failed and we were unable to recover it. 
00:31:39.610 [2024-06-10 12:09:33.140473] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.610 [2024-06-10 12:09:33.140536] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.610 [2024-06-10 12:09:33.140551] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.610 [2024-06-10 12:09:33.140556] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.610 [2024-06-10 12:09:33.140560] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.610 [2024-06-10 12:09:33.140570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.610 qpair failed and we were unable to recover it. 00:31:39.610 [2024-06-10 12:09:33.150407] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.610 [2024-06-10 12:09:33.150460] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.610 [2024-06-10 12:09:33.150472] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.610 [2024-06-10 12:09:33.150476] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.610 [2024-06-10 12:09:33.150480] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.610 [2024-06-10 12:09:33.150491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.610 qpair failed and we were unable to recover it. 00:31:39.610 [2024-06-10 12:09:33.160506] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.610 [2024-06-10 12:09:33.160563] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.610 [2024-06-10 12:09:33.160574] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.610 [2024-06-10 12:09:33.160579] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.610 [2024-06-10 12:09:33.160583] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.610 [2024-06-10 12:09:33.160593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.610 qpair failed and we were unable to recover it. 
00:31:39.610 [2024-06-10 12:09:33.170553] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.610 [2024-06-10 12:09:33.170612] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.610 [2024-06-10 12:09:33.170623] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.610 [2024-06-10 12:09:33.170628] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.610 [2024-06-10 12:09:33.170632] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.610 [2024-06-10 12:09:33.170642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.610 qpair failed and we were unable to recover it. 00:31:39.610 [2024-06-10 12:09:33.180465] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.610 [2024-06-10 12:09:33.180529] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.610 [2024-06-10 12:09:33.180542] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.610 [2024-06-10 12:09:33.180546] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.610 [2024-06-10 12:09:33.180550] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.610 [2024-06-10 12:09:33.180561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.610 qpair failed and we were unable to recover it. 00:31:39.610 [2024-06-10 12:09:33.190611] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.610 [2024-06-10 12:09:33.190667] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.610 [2024-06-10 12:09:33.190679] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.610 [2024-06-10 12:09:33.190684] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.610 [2024-06-10 12:09:33.190688] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.610 [2024-06-10 12:09:33.190698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.610 qpair failed and we were unable to recover it. 
00:31:39.610 [2024-06-10 12:09:33.200628] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.610 [2024-06-10 12:09:33.200728] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.610 [2024-06-10 12:09:33.200739] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.610 [2024-06-10 12:09:33.200744] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.610 [2024-06-10 12:09:33.200748] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.610 [2024-06-10 12:09:33.200758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.610 qpair failed and we were unable to recover it. 00:31:39.610 [2024-06-10 12:09:33.210695] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.610 [2024-06-10 12:09:33.210753] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.610 [2024-06-10 12:09:33.210764] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.610 [2024-06-10 12:09:33.210769] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.610 [2024-06-10 12:09:33.210773] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.610 [2024-06-10 12:09:33.210783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.610 qpair failed and we were unable to recover it. 00:31:39.610 [2024-06-10 12:09:33.220716] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.610 [2024-06-10 12:09:33.220782] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.610 [2024-06-10 12:09:33.220794] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.610 [2024-06-10 12:09:33.220798] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.610 [2024-06-10 12:09:33.220802] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.610 [2024-06-10 12:09:33.220812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.610 qpair failed and we were unable to recover it. 
00:31:39.610 [2024-06-10 12:09:33.230587] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.610 [2024-06-10 12:09:33.230645] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.610 [2024-06-10 12:09:33.230659] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.610 [2024-06-10 12:09:33.230664] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.610 [2024-06-10 12:09:33.230668] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.610 [2024-06-10 12:09:33.230678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.610 qpair failed and we were unable to recover it. 00:31:39.610 [2024-06-10 12:09:33.240724] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.610 [2024-06-10 12:09:33.240796] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.610 [2024-06-10 12:09:33.240808] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.610 [2024-06-10 12:09:33.240812] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.610 [2024-06-10 12:09:33.240817] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.610 [2024-06-10 12:09:33.240826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.610 qpair failed and we were unable to recover it. 00:31:39.610 [2024-06-10 12:09:33.250783] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.610 [2024-06-10 12:09:33.250840] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.610 [2024-06-10 12:09:33.250852] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.610 [2024-06-10 12:09:33.250857] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.610 [2024-06-10 12:09:33.250861] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.610 [2024-06-10 12:09:33.250871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.610 qpair failed and we were unable to recover it. 
00:31:39.610 [2024-06-10 12:09:33.260861] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.610 [2024-06-10 12:09:33.260977] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.610 [2024-06-10 12:09:33.260988] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.610 [2024-06-10 12:09:33.260993] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.610 [2024-06-10 12:09:33.260997] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.610 [2024-06-10 12:09:33.261007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.610 qpair failed and we were unable to recover it. 00:31:39.611 [2024-06-10 12:09:33.270928] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.611 [2024-06-10 12:09:33.270984] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.611 [2024-06-10 12:09:33.270995] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.611 [2024-06-10 12:09:33.271000] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.611 [2024-06-10 12:09:33.271004] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.611 [2024-06-10 12:09:33.271017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.611 qpair failed and we were unable to recover it. 00:31:39.611 [2024-06-10 12:09:33.280893] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.611 [2024-06-10 12:09:33.280956] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.611 [2024-06-10 12:09:33.280968] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.611 [2024-06-10 12:09:33.280973] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.611 [2024-06-10 12:09:33.280977] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.611 [2024-06-10 12:09:33.280987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.611 qpair failed and we were unable to recover it. 
00:31:39.611 [2024-06-10 12:09:33.290896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.611 [2024-06-10 12:09:33.290956] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.611 [2024-06-10 12:09:33.290969] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.611 [2024-06-10 12:09:33.290973] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.611 [2024-06-10 12:09:33.290977] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.611 [2024-06-10 12:09:33.290989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.611 qpair failed and we were unable to recover it. 00:31:39.611 [2024-06-10 12:09:33.300918] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.611 [2024-06-10 12:09:33.300981] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.611 [2024-06-10 12:09:33.300993] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.611 [2024-06-10 12:09:33.300997] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.611 [2024-06-10 12:09:33.301001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.611 [2024-06-10 12:09:33.301012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.611 qpair failed and we were unable to recover it. 00:31:39.611 [2024-06-10 12:09:33.310959] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.611 [2024-06-10 12:09:33.311117] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.611 [2024-06-10 12:09:33.311142] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.611 [2024-06-10 12:09:33.311147] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.611 [2024-06-10 12:09:33.311152] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.611 [2024-06-10 12:09:33.311166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.611 qpair failed and we were unable to recover it. 
00:31:39.611 [2024-06-10 12:09:33.320875] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.611 [2024-06-10 12:09:33.320935] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.611 [2024-06-10 12:09:33.320951] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.611 [2024-06-10 12:09:33.320956] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.611 [2024-06-10 12:09:33.320960] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.611 [2024-06-10 12:09:33.320971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.611 qpair failed and we were unable to recover it. 00:31:39.611 [2024-06-10 12:09:33.331007] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.611 [2024-06-10 12:09:33.331067] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.611 [2024-06-10 12:09:33.331079] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.611 [2024-06-10 12:09:33.331084] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.611 [2024-06-10 12:09:33.331088] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.611 [2024-06-10 12:09:33.331098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.611 qpair failed and we were unable to recover it. 00:31:39.611 [2024-06-10 12:09:33.340944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.611 [2024-06-10 12:09:33.341010] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.611 [2024-06-10 12:09:33.341021] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.611 [2024-06-10 12:09:33.341026] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.611 [2024-06-10 12:09:33.341030] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.611 [2024-06-10 12:09:33.341040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.611 qpair failed and we were unable to recover it. 
00:31:39.611 [2024-06-10 12:09:33.351089] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.611 [2024-06-10 12:09:33.351152] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.611 [2024-06-10 12:09:33.351164] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.611 [2024-06-10 12:09:33.351169] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.611 [2024-06-10 12:09:33.351173] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.611 [2024-06-10 12:09:33.351183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.611 qpair failed and we were unable to recover it. 00:31:39.611 [2024-06-10 12:09:33.361117] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.611 [2024-06-10 12:09:33.361178] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.611 [2024-06-10 12:09:33.361190] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.611 [2024-06-10 12:09:33.361195] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.611 [2024-06-10 12:09:33.361202] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.611 [2024-06-10 12:09:33.361212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.611 qpair failed and we were unable to recover it. 00:31:39.611 [2024-06-10 12:09:33.371147] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.611 [2024-06-10 12:09:33.371245] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.611 [2024-06-10 12:09:33.371257] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.611 [2024-06-10 12:09:33.371262] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.611 [2024-06-10 12:09:33.371266] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.611 [2024-06-10 12:09:33.371276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.611 qpair failed and we were unable to recover it. 
00:31:39.874 [2024-06-10 12:09:33.381164] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.874 [2024-06-10 12:09:33.381230] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.874 [2024-06-10 12:09:33.381246] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.874 [2024-06-10 12:09:33.381251] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.874 [2024-06-10 12:09:33.381256] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.874 [2024-06-10 12:09:33.381266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.874 qpair failed and we were unable to recover it. 00:31:39.874 [2024-06-10 12:09:33.391166] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.874 [2024-06-10 12:09:33.391271] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.874 [2024-06-10 12:09:33.391283] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.874 [2024-06-10 12:09:33.391288] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.874 [2024-06-10 12:09:33.391293] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.874 [2024-06-10 12:09:33.391303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.874 qpair failed and we were unable to recover it. 00:31:39.874 [2024-06-10 12:09:33.401206] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.874 [2024-06-10 12:09:33.401278] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.874 [2024-06-10 12:09:33.401289] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.874 [2024-06-10 12:09:33.401294] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.874 [2024-06-10 12:09:33.401299] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.874 [2024-06-10 12:09:33.401309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.874 qpair failed and we were unable to recover it. 
00:31:39.874 [2024-06-10 12:09:33.411266] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.874 [2024-06-10 12:09:33.411377] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.874 [2024-06-10 12:09:33.411390] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.874 [2024-06-10 12:09:33.411395] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.874 [2024-06-10 12:09:33.411399] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.874 [2024-06-10 12:09:33.411410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.874 qpair failed and we were unable to recover it. 00:31:39.874 [2024-06-10 12:09:33.421283] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.874 [2024-06-10 12:09:33.421344] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.874 [2024-06-10 12:09:33.421356] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.874 [2024-06-10 12:09:33.421360] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.874 [2024-06-10 12:09:33.421365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.874 [2024-06-10 12:09:33.421375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.874 qpair failed and we were unable to recover it. 00:31:39.874 [2024-06-10 12:09:33.431192] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.874 [2024-06-10 12:09:33.431254] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.874 [2024-06-10 12:09:33.431266] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.874 [2024-06-10 12:09:33.431271] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.874 [2024-06-10 12:09:33.431275] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.874 [2024-06-10 12:09:33.431286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.874 qpair failed and we were unable to recover it. 
00:31:39.874 [2024-06-10 12:09:33.441337] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.874 [2024-06-10 12:09:33.441396] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.874 [2024-06-10 12:09:33.441407] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.874 [2024-06-10 12:09:33.441412] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.874 [2024-06-10 12:09:33.441416] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.874 [2024-06-10 12:09:33.441427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.874 qpair failed and we were unable to recover it. 00:31:39.874 [2024-06-10 12:09:33.451268] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.874 [2024-06-10 12:09:33.451329] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.874 [2024-06-10 12:09:33.451341] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.874 [2024-06-10 12:09:33.451346] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.874 [2024-06-10 12:09:33.451353] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.874 [2024-06-10 12:09:33.451363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.874 qpair failed and we were unable to recover it. 00:31:39.874 [2024-06-10 12:09:33.461423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.874 [2024-06-10 12:09:33.461503] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.874 [2024-06-10 12:09:33.461515] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.874 [2024-06-10 12:09:33.461520] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.874 [2024-06-10 12:09:33.461524] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.874 [2024-06-10 12:09:33.461535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.874 qpair failed and we were unable to recover it. 
00:31:39.874 [2024-06-10 12:09:33.471475] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.874 [2024-06-10 12:09:33.471537] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.874 [2024-06-10 12:09:33.471549] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.874 [2024-06-10 12:09:33.471554] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.874 [2024-06-10 12:09:33.471558] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.874 [2024-06-10 12:09:33.471568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.874 qpair failed and we were unable to recover it. 00:31:39.874 [2024-06-10 12:09:33.481461] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.874 [2024-06-10 12:09:33.481525] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.874 [2024-06-10 12:09:33.481536] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.874 [2024-06-10 12:09:33.481541] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.874 [2024-06-10 12:09:33.481545] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.874 [2024-06-10 12:09:33.481555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.874 qpair failed and we were unable to recover it. 00:31:39.874 [2024-06-10 12:09:33.491515] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.874 [2024-06-10 12:09:33.491572] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.874 [2024-06-10 12:09:33.491584] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.874 [2024-06-10 12:09:33.491588] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.874 [2024-06-10 12:09:33.491593] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.874 [2024-06-10 12:09:33.491603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.874 qpair failed and we were unable to recover it. 
00:31:39.874 [2024-06-10 12:09:33.501405] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.875 [2024-06-10 12:09:33.501465] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.875 [2024-06-10 12:09:33.501477] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.875 [2024-06-10 12:09:33.501482] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.875 [2024-06-10 12:09:33.501486] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.875 [2024-06-10 12:09:33.501496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.875 qpair failed and we were unable to recover it. 00:31:39.875 [2024-06-10 12:09:33.511560] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.875 [2024-06-10 12:09:33.511621] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.875 [2024-06-10 12:09:33.511633] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.875 [2024-06-10 12:09:33.511638] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.875 [2024-06-10 12:09:33.511642] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.875 [2024-06-10 12:09:33.511652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.875 qpair failed and we were unable to recover it. 00:31:39.875 [2024-06-10 12:09:33.521587] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.875 [2024-06-10 12:09:33.521645] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.875 [2024-06-10 12:09:33.521657] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.875 [2024-06-10 12:09:33.521662] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.875 [2024-06-10 12:09:33.521666] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.875 [2024-06-10 12:09:33.521676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.875 qpair failed and we were unable to recover it. 
00:31:39.875 [2024-06-10 12:09:33.531626] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.875 [2024-06-10 12:09:33.531697] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.875 [2024-06-10 12:09:33.531709] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.875 [2024-06-10 12:09:33.531714] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.875 [2024-06-10 12:09:33.531718] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.875 [2024-06-10 12:09:33.531728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.875 qpair failed and we were unable to recover it. 00:31:39.875 [2024-06-10 12:09:33.541522] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.875 [2024-06-10 12:09:33.541588] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.875 [2024-06-10 12:09:33.541600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.875 [2024-06-10 12:09:33.541607] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.875 [2024-06-10 12:09:33.541612] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.875 [2024-06-10 12:09:33.541622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.875 qpair failed and we were unable to recover it. 00:31:39.875 [2024-06-10 12:09:33.551671] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.875 [2024-06-10 12:09:33.551730] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.875 [2024-06-10 12:09:33.551743] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.875 [2024-06-10 12:09:33.551748] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.875 [2024-06-10 12:09:33.551752] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.875 [2024-06-10 12:09:33.551762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.875 qpair failed and we were unable to recover it. 
00:31:39.875 [2024-06-10 12:09:33.561575] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.875 [2024-06-10 12:09:33.561637] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.875 [2024-06-10 12:09:33.561649] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.875 [2024-06-10 12:09:33.561654] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.875 [2024-06-10 12:09:33.561658] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.875 [2024-06-10 12:09:33.561669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.875 qpair failed and we were unable to recover it. 00:31:39.875 [2024-06-10 12:09:33.571703] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.875 [2024-06-10 12:09:33.571832] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.875 [2024-06-10 12:09:33.571844] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.875 [2024-06-10 12:09:33.571850] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.875 [2024-06-10 12:09:33.571854] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.875 [2024-06-10 12:09:33.571864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.875 qpair failed and we were unable to recover it. 00:31:39.875 [2024-06-10 12:09:33.581760] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.875 [2024-06-10 12:09:33.581828] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.875 [2024-06-10 12:09:33.581840] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.875 [2024-06-10 12:09:33.581845] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.875 [2024-06-10 12:09:33.581849] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.875 [2024-06-10 12:09:33.581859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.875 qpair failed and we were unable to recover it. 
00:31:39.875 [2024-06-10 12:09:33.591793] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.875 [2024-06-10 12:09:33.591849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.875 [2024-06-10 12:09:33.591860] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.875 [2024-06-10 12:09:33.591866] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.875 [2024-06-10 12:09:33.591870] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.875 [2024-06-10 12:09:33.591880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.875 qpair failed and we were unable to recover it. 00:31:39.875 [2024-06-10 12:09:33.601819] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.875 [2024-06-10 12:09:33.601889] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.875 [2024-06-10 12:09:33.601900] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.875 [2024-06-10 12:09:33.601905] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.875 [2024-06-10 12:09:33.601910] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.875 [2024-06-10 12:09:33.601920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.875 qpair failed and we were unable to recover it. 00:31:39.875 [2024-06-10 12:09:33.611861] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.875 [2024-06-10 12:09:33.611924] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.875 [2024-06-10 12:09:33.611936] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.875 [2024-06-10 12:09:33.611941] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.875 [2024-06-10 12:09:33.611945] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.875 [2024-06-10 12:09:33.611955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.875 qpair failed and we were unable to recover it. 
00:31:39.875 [2024-06-10 12:09:33.621890] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.875 [2024-06-10 12:09:33.621985] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.875 [2024-06-10 12:09:33.621998] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.875 [2024-06-10 12:09:33.622003] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.875 [2024-06-10 12:09:33.622008] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.875 [2024-06-10 12:09:33.622019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.875 qpair failed and we were unable to recover it. 00:31:39.875 [2024-06-10 12:09:33.631911] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.875 [2024-06-10 12:09:33.631969] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.875 [2024-06-10 12:09:33.631980] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.875 [2024-06-10 12:09:33.631988] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.875 [2024-06-10 12:09:33.631993] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.876 [2024-06-10 12:09:33.632003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.876 qpair failed and we were unable to recover it. 00:31:39.876 [2024-06-10 12:09:33.641920] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.876 [2024-06-10 12:09:33.641977] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.876 [2024-06-10 12:09:33.641989] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.876 [2024-06-10 12:09:33.641994] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.876 [2024-06-10 12:09:33.641998] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:39.876 [2024-06-10 12:09:33.642008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:39.876 qpair failed and we were unable to recover it. 
00:31:40.138 [2024-06-10 12:09:33.651946] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.138 [2024-06-10 12:09:33.652008] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.138 [2024-06-10 12:09:33.652020] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.138 [2024-06-10 12:09:33.652025] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.138 [2024-06-10 12:09:33.652030] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.138 [2024-06-10 12:09:33.652040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.138 qpair failed and we were unable to recover it. 00:31:40.138 [2024-06-10 12:09:33.661977] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.138 [2024-06-10 12:09:33.662032] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.138 [2024-06-10 12:09:33.662044] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.138 [2024-06-10 12:09:33.662049] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.138 [2024-06-10 12:09:33.662054] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.138 [2024-06-10 12:09:33.662064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.138 qpair failed and we were unable to recover it. 00:31:40.138 [2024-06-10 12:09:33.672024] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.138 [2024-06-10 12:09:33.672088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.138 [2024-06-10 12:09:33.672099] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.138 [2024-06-10 12:09:33.672105] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.138 [2024-06-10 12:09:33.672109] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.138 [2024-06-10 12:09:33.672120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.138 qpair failed and we were unable to recover it. 
00:31:40.138 [2024-06-10 12:09:33.682015] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.138 [2024-06-10 12:09:33.682073] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.138 [2024-06-10 12:09:33.682085] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.138 [2024-06-10 12:09:33.682092] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.138 [2024-06-10 12:09:33.682097] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.138 [2024-06-10 12:09:33.682108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.138 qpair failed and we were unable to recover it. 00:31:40.138 [2024-06-10 12:09:33.692126] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.138 [2024-06-10 12:09:33.692185] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.138 [2024-06-10 12:09:33.692197] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.138 [2024-06-10 12:09:33.692202] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.138 [2024-06-10 12:09:33.692206] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.138 [2024-06-10 12:09:33.692217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.138 qpair failed and we were unable to recover it. 00:31:40.138 [2024-06-10 12:09:33.702122] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.138 [2024-06-10 12:09:33.702184] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.138 [2024-06-10 12:09:33.702195] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.138 [2024-06-10 12:09:33.702200] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.138 [2024-06-10 12:09:33.702205] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.138 [2024-06-10 12:09:33.702215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.138 qpair failed and we were unable to recover it. 
00:31:40.138 [2024-06-10 12:09:33.712001] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.138 [2024-06-10 12:09:33.712064] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.138 [2024-06-10 12:09:33.712075] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.138 [2024-06-10 12:09:33.712080] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.138 [2024-06-10 12:09:33.712085] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.138 [2024-06-10 12:09:33.712095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.138 qpair failed and we were unable to recover it. 00:31:40.138 [2024-06-10 12:09:33.722146] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.138 [2024-06-10 12:09:33.722210] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.138 [2024-06-10 12:09:33.722225] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.138 [2024-06-10 12:09:33.722230] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.138 [2024-06-10 12:09:33.722235] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.138 [2024-06-10 12:09:33.722248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.138 qpair failed and we were unable to recover it. 00:31:40.138 [2024-06-10 12:09:33.732061] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.138 [2024-06-10 12:09:33.732122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.138 [2024-06-10 12:09:33.732134] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.138 [2024-06-10 12:09:33.732139] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.138 [2024-06-10 12:09:33.732143] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.138 [2024-06-10 12:09:33.732153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.138 qpair failed and we were unable to recover it. 
00:31:40.138 [2024-06-10 12:09:33.742257] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.139 [2024-06-10 12:09:33.742326] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.139 [2024-06-10 12:09:33.742338] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.139 [2024-06-10 12:09:33.742343] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.139 [2024-06-10 12:09:33.742348] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.139 [2024-06-10 12:09:33.742359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.139 qpair failed and we were unable to recover it. 00:31:40.139 [2024-06-10 12:09:33.752214] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.139 [2024-06-10 12:09:33.752274] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.139 [2024-06-10 12:09:33.752286] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.139 [2024-06-10 12:09:33.752291] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.139 [2024-06-10 12:09:33.752296] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.139 [2024-06-10 12:09:33.752306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.139 qpair failed and we were unable to recover it. 00:31:40.139 [2024-06-10 12:09:33.762146] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.139 [2024-06-10 12:09:33.762205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.139 [2024-06-10 12:09:33.762217] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.139 [2024-06-10 12:09:33.762222] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.139 [2024-06-10 12:09:33.762226] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.139 [2024-06-10 12:09:33.762246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.139 qpair failed and we were unable to recover it. 
00:31:40.139 [2024-06-10 12:09:33.772177] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.139 [2024-06-10 12:09:33.772233] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.139 [2024-06-10 12:09:33.772248] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.139 [2024-06-10 12:09:33.772253] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.139 [2024-06-10 12:09:33.772258] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.139 [2024-06-10 12:09:33.772268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.139 qpair failed and we were unable to recover it. 00:31:40.139 [2024-06-10 12:09:33.782368] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.139 [2024-06-10 12:09:33.782439] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.139 [2024-06-10 12:09:33.782451] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.139 [2024-06-10 12:09:33.782456] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.139 [2024-06-10 12:09:33.782461] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.139 [2024-06-10 12:09:33.782472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.139 qpair failed and we were unable to recover it. 00:31:40.139 [2024-06-10 12:09:33.792334] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.139 [2024-06-10 12:09:33.792405] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.139 [2024-06-10 12:09:33.792417] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.139 [2024-06-10 12:09:33.792422] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.139 [2024-06-10 12:09:33.792426] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.139 [2024-06-10 12:09:33.792437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.139 qpair failed and we were unable to recover it. 
00:31:40.139 [2024-06-10 12:09:33.802446] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.139 [2024-06-10 12:09:33.802546] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.139 [2024-06-10 12:09:33.802558] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.139 [2024-06-10 12:09:33.802563] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.139 [2024-06-10 12:09:33.802568] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.139 [2024-06-10 12:09:33.802578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.139 qpair failed and we were unable to recover it. 00:31:40.139 [2024-06-10 12:09:33.812288] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.139 [2024-06-10 12:09:33.812348] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.139 [2024-06-10 12:09:33.812362] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.139 [2024-06-10 12:09:33.812368] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.139 [2024-06-10 12:09:33.812372] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.139 [2024-06-10 12:09:33.812382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.139 qpair failed and we were unable to recover it. 00:31:40.139 [2024-06-10 12:09:33.822459] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.139 [2024-06-10 12:09:33.822522] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.139 [2024-06-10 12:09:33.822534] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.139 [2024-06-10 12:09:33.822539] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.139 [2024-06-10 12:09:33.822544] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.139 [2024-06-10 12:09:33.822554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.139 qpair failed and we were unable to recover it. 
00:31:40.139 [2024-06-10 12:09:33.832444] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.139 [2024-06-10 12:09:33.832546] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.139 [2024-06-10 12:09:33.832558] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.139 [2024-06-10 12:09:33.832562] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.139 [2024-06-10 12:09:33.832568] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.139 [2024-06-10 12:09:33.832578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.139 qpair failed and we were unable to recover it. 00:31:40.139 [2024-06-10 12:09:33.842383] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.139 [2024-06-10 12:09:33.842473] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.139 [2024-06-10 12:09:33.842486] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.139 [2024-06-10 12:09:33.842491] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.139 [2024-06-10 12:09:33.842495] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.139 [2024-06-10 12:09:33.842506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.139 qpair failed and we were unable to recover it. 00:31:40.139 [2024-06-10 12:09:33.852534] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.139 [2024-06-10 12:09:33.852592] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.139 [2024-06-10 12:09:33.852605] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.139 [2024-06-10 12:09:33.852610] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.139 [2024-06-10 12:09:33.852615] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.139 [2024-06-10 12:09:33.852628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.139 qpair failed and we were unable to recover it. 
00:31:40.139 [2024-06-10 12:09:33.862568] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.139 [2024-06-10 12:09:33.862635] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.139 [2024-06-10 12:09:33.862647] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.139 [2024-06-10 12:09:33.862652] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.139 [2024-06-10 12:09:33.862656] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.139 [2024-06-10 12:09:33.862666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.139 qpair failed and we were unable to recover it. 00:31:40.139 [2024-06-10 12:09:33.872453] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.139 [2024-06-10 12:09:33.872513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.139 [2024-06-10 12:09:33.872525] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.140 [2024-06-10 12:09:33.872530] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.140 [2024-06-10 12:09:33.872535] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.140 [2024-06-10 12:09:33.872545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.140 qpair failed and we were unable to recover it. 00:31:40.140 [2024-06-10 12:09:33.882583] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.140 [2024-06-10 12:09:33.882689] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.140 [2024-06-10 12:09:33.882701] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.140 [2024-06-10 12:09:33.882706] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.140 [2024-06-10 12:09:33.882711] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.140 [2024-06-10 12:09:33.882721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.140 qpair failed and we were unable to recover it. 
00:31:40.140 [2024-06-10 12:09:33.892654] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.140 [2024-06-10 12:09:33.892717] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.140 [2024-06-10 12:09:33.892732] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.140 [2024-06-10 12:09:33.892738] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.140 [2024-06-10 12:09:33.892742] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.140 [2024-06-10 12:09:33.892755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.140 qpair failed and we were unable to recover it. 00:31:40.140 [2024-06-10 12:09:33.902658] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.140 [2024-06-10 12:09:33.902746] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.140 [2024-06-10 12:09:33.902759] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.140 [2024-06-10 12:09:33.902764] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.140 [2024-06-10 12:09:33.902769] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.140 [2024-06-10 12:09:33.902781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.140 qpair failed and we were unable to recover it. 00:31:40.402 [2024-06-10 12:09:33.912718] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.402 [2024-06-10 12:09:33.912781] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.402 [2024-06-10 12:09:33.912793] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.402 [2024-06-10 12:09:33.912798] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.402 [2024-06-10 12:09:33.912803] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.402 [2024-06-10 12:09:33.912813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.402 qpair failed and we were unable to recover it. 
00:31:40.402 [2024-06-10 12:09:33.922712] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.402 [2024-06-10 12:09:33.922769] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.402 [2024-06-10 12:09:33.922781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.402 [2024-06-10 12:09:33.922786] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.402 [2024-06-10 12:09:33.922791] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.402 [2024-06-10 12:09:33.922801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.402 qpair failed and we were unable to recover it. 00:31:40.402 [2024-06-10 12:09:33.932764] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.402 [2024-06-10 12:09:33.932825] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.402 [2024-06-10 12:09:33.932837] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.402 [2024-06-10 12:09:33.932842] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.402 [2024-06-10 12:09:33.932847] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.402 [2024-06-10 12:09:33.932857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.402 qpair failed and we were unable to recover it. 00:31:40.402 [2024-06-10 12:09:33.942743] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.402 [2024-06-10 12:09:33.942822] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.402 [2024-06-10 12:09:33.942834] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.402 [2024-06-10 12:09:33.942839] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.402 [2024-06-10 12:09:33.942847] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.402 [2024-06-10 12:09:33.942858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.402 qpair failed and we were unable to recover it. 
00:31:40.402 [2024-06-10 12:09:33.952799] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.402 [2024-06-10 12:09:33.952859] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.402 [2024-06-10 12:09:33.952871] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.402 [2024-06-10 12:09:33.952877] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.402 [2024-06-10 12:09:33.952881] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.402 [2024-06-10 12:09:33.952891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.402 qpair failed and we were unable to recover it. 00:31:40.402 [2024-06-10 12:09:33.962726] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.403 [2024-06-10 12:09:33.962788] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.403 [2024-06-10 12:09:33.962799] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.403 [2024-06-10 12:09:33.962804] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.403 [2024-06-10 12:09:33.962809] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.403 [2024-06-10 12:09:33.962819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.403 qpair failed and we were unable to recover it. 00:31:40.403 [2024-06-10 12:09:33.972891] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.403 [2024-06-10 12:09:33.972964] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.403 [2024-06-10 12:09:33.972976] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.403 [2024-06-10 12:09:33.972981] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.403 [2024-06-10 12:09:33.972986] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.403 [2024-06-10 12:09:33.972996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.403 qpair failed and we were unable to recover it. 
00:31:40.403 [2024-06-10 12:09:33.982947] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.403 [2024-06-10 12:09:33.983048] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.403 [2024-06-10 12:09:33.983067] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.403 [2024-06-10 12:09:33.983073] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.403 [2024-06-10 12:09:33.983078] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.403 [2024-06-10 12:09:33.983093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.403 qpair failed and we were unable to recover it. 00:31:40.403 [2024-06-10 12:09:33.992921] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.403 [2024-06-10 12:09:33.992993] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.403 [2024-06-10 12:09:33.993012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.403 [2024-06-10 12:09:33.993018] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.403 [2024-06-10 12:09:33.993023] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.403 [2024-06-10 12:09:33.993038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.403 qpair failed and we were unable to recover it. 00:31:40.403 [2024-06-10 12:09:34.002957] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.403 [2024-06-10 12:09:34.003021] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.403 [2024-06-10 12:09:34.003040] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.403 [2024-06-10 12:09:34.003046] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.403 [2024-06-10 12:09:34.003051] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.403 [2024-06-10 12:09:34.003065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.403 qpair failed and we were unable to recover it. 
00:31:40.403 [2024-06-10 12:09:34.012994] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.403 [2024-06-10 12:09:34.013061] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.403 [2024-06-10 12:09:34.013080] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.403 [2024-06-10 12:09:34.013087] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.403 [2024-06-10 12:09:34.013092] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.403 [2024-06-10 12:09:34.013106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.403 qpair failed and we were unable to recover it. 00:31:40.403 [2024-06-10 12:09:34.023000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.403 [2024-06-10 12:09:34.023066] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.403 [2024-06-10 12:09:34.023079] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.403 [2024-06-10 12:09:34.023084] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.403 [2024-06-10 12:09:34.023089] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.403 [2024-06-10 12:09:34.023100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.403 qpair failed and we were unable to recover it. 00:31:40.403 [2024-06-10 12:09:34.033049] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.403 [2024-06-10 12:09:34.033110] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.403 [2024-06-10 12:09:34.033122] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.403 [2024-06-10 12:09:34.033131] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.403 [2024-06-10 12:09:34.033136] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.403 [2024-06-10 12:09:34.033146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.403 qpair failed and we were unable to recover it. 
00:31:40.403 [2024-06-10 12:09:34.043061] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.403 [2024-06-10 12:09:34.043154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.403 [2024-06-10 12:09:34.043166] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.403 [2024-06-10 12:09:34.043171] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.403 [2024-06-10 12:09:34.043176] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.403 [2024-06-10 12:09:34.043186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.403 qpair failed and we were unable to recover it. 00:31:40.403 [2024-06-10 12:09:34.053089] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.403 [2024-06-10 12:09:34.053199] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.403 [2024-06-10 12:09:34.053211] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.403 [2024-06-10 12:09:34.053216] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.403 [2024-06-10 12:09:34.053221] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.403 [2024-06-10 12:09:34.053231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.403 qpair failed and we were unable to recover it. 00:31:40.403 [2024-06-10 12:09:34.063132] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.403 [2024-06-10 12:09:34.063194] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.403 [2024-06-10 12:09:34.063205] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.403 [2024-06-10 12:09:34.063210] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.403 [2024-06-10 12:09:34.063215] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.403 [2024-06-10 12:09:34.063225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.403 qpair failed and we were unable to recover it. 
00:31:40.403 [2024-06-10 12:09:34.073152] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.403 [2024-06-10 12:09:34.073233] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.403 [2024-06-10 12:09:34.073249] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.403 [2024-06-10 12:09:34.073255] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.403 [2024-06-10 12:09:34.073259] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.403 [2024-06-10 12:09:34.073269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.403 qpair failed and we were unable to recover it. 00:31:40.403 [2024-06-10 12:09:34.083051] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.403 [2024-06-10 12:09:34.083108] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.403 [2024-06-10 12:09:34.083120] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.403 [2024-06-10 12:09:34.083126] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.403 [2024-06-10 12:09:34.083131] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.403 [2024-06-10 12:09:34.083142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.403 qpair failed and we were unable to recover it. 00:31:40.403 [2024-06-10 12:09:34.093209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.403 [2024-06-10 12:09:34.093270] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.403 [2024-06-10 12:09:34.093282] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.403 [2024-06-10 12:09:34.093287] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.403 [2024-06-10 12:09:34.093292] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.404 [2024-06-10 12:09:34.093303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.404 qpair failed and we were unable to recover it. 
00:31:40.404 [2024-06-10 12:09:34.103114] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.404 [2024-06-10 12:09:34.103180] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.404 [2024-06-10 12:09:34.103192] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.404 [2024-06-10 12:09:34.103197] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.404 [2024-06-10 12:09:34.103202] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.404 [2024-06-10 12:09:34.103212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.404 qpair failed and we were unable to recover it. 00:31:40.404 [2024-06-10 12:09:34.113261] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.404 [2024-06-10 12:09:34.113318] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.404 [2024-06-10 12:09:34.113330] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.404 [2024-06-10 12:09:34.113335] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.404 [2024-06-10 12:09:34.113339] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.404 [2024-06-10 12:09:34.113350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.404 qpair failed and we were unable to recover it. 00:31:40.404 [2024-06-10 12:09:34.123283] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.404 [2024-06-10 12:09:34.123340] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.404 [2024-06-10 12:09:34.123352] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.404 [2024-06-10 12:09:34.123360] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.404 [2024-06-10 12:09:34.123365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.404 [2024-06-10 12:09:34.123376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.404 qpair failed and we were unable to recover it. 
00:31:40.404 [2024-06-10 12:09:34.133305] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.404 [2024-06-10 12:09:34.133369] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.404 [2024-06-10 12:09:34.133381] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.404 [2024-06-10 12:09:34.133386] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.404 [2024-06-10 12:09:34.133391] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.404 [2024-06-10 12:09:34.133402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.404 qpair failed and we were unable to recover it. 00:31:40.404 [2024-06-10 12:09:34.143393] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.404 [2024-06-10 12:09:34.143472] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.404 [2024-06-10 12:09:34.143483] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.404 [2024-06-10 12:09:34.143488] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.404 [2024-06-10 12:09:34.143493] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.404 [2024-06-10 12:09:34.143504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.404 qpair failed and we were unable to recover it. 00:31:40.404 [2024-06-10 12:09:34.153383] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.404 [2024-06-10 12:09:34.153443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.404 [2024-06-10 12:09:34.153455] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.404 [2024-06-10 12:09:34.153460] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.404 [2024-06-10 12:09:34.153464] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.404 [2024-06-10 12:09:34.153475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.404 qpair failed and we were unable to recover it. 
00:31:40.404 [2024-06-10 12:09:34.163399] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.404 [2024-06-10 12:09:34.163456] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.404 [2024-06-10 12:09:34.163467] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.404 [2024-06-10 12:09:34.163473] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.404 [2024-06-10 12:09:34.163477] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.404 [2024-06-10 12:09:34.163488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.404 qpair failed and we were unable to recover it. 00:31:40.666 [2024-06-10 12:09:34.173434] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.666 [2024-06-10 12:09:34.173491] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.666 [2024-06-10 12:09:34.173503] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.666 [2024-06-10 12:09:34.173508] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.666 [2024-06-10 12:09:34.173513] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.666 [2024-06-10 12:09:34.173523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.666 qpair failed and we were unable to recover it. 00:31:40.666 [2024-06-10 12:09:34.183487] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.666 [2024-06-10 12:09:34.183552] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.666 [2024-06-10 12:09:34.183564] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.666 [2024-06-10 12:09:34.183569] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.666 [2024-06-10 12:09:34.183574] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.666 [2024-06-10 12:09:34.183585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.666 qpair failed and we were unable to recover it. 
00:31:40.666 [2024-06-10 12:09:34.193476] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.666 [2024-06-10 12:09:34.193535] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.666 [2024-06-10 12:09:34.193546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.666 [2024-06-10 12:09:34.193551] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.666 [2024-06-10 12:09:34.193556] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.666 [2024-06-10 12:09:34.193566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.666 qpair failed and we were unable to recover it. 00:31:40.666 [2024-06-10 12:09:34.203546] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.666 [2024-06-10 12:09:34.203648] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.666 [2024-06-10 12:09:34.203659] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.667 [2024-06-10 12:09:34.203664] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.667 [2024-06-10 12:09:34.203669] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.667 [2024-06-10 12:09:34.203679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.667 qpair failed and we were unable to recover it. 00:31:40.667 [2024-06-10 12:09:34.213561] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.667 [2024-06-10 12:09:34.213645] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.667 [2024-06-10 12:09:34.213660] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.667 [2024-06-10 12:09:34.213665] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.667 [2024-06-10 12:09:34.213670] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.667 [2024-06-10 12:09:34.213681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.667 qpair failed and we were unable to recover it. 
00:31:40.667 [2024-06-10 12:09:34.223594] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.667 [2024-06-10 12:09:34.223654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.667 [2024-06-10 12:09:34.223666] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.667 [2024-06-10 12:09:34.223671] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.667 [2024-06-10 12:09:34.223675] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.667 [2024-06-10 12:09:34.223686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.667 qpair failed and we were unable to recover it. 00:31:40.667 [2024-06-10 12:09:34.233558] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.667 [2024-06-10 12:09:34.233613] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.667 [2024-06-10 12:09:34.233625] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.667 [2024-06-10 12:09:34.233630] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.667 [2024-06-10 12:09:34.233634] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.667 [2024-06-10 12:09:34.233644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.667 qpair failed and we were unable to recover it. 00:31:40.667 [2024-06-10 12:09:34.243633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.667 [2024-06-10 12:09:34.243693] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.667 [2024-06-10 12:09:34.243704] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.667 [2024-06-10 12:09:34.243710] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.667 [2024-06-10 12:09:34.243714] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.667 [2024-06-10 12:09:34.243724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.667 qpair failed and we were unable to recover it. 
00:31:40.667 [2024-06-10 12:09:34.253676] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.667 [2024-06-10 12:09:34.253743] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.667 [2024-06-10 12:09:34.253755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.667 [2024-06-10 12:09:34.253760] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.667 [2024-06-10 12:09:34.253764] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.667 [2024-06-10 12:09:34.253778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.667 qpair failed and we were unable to recover it. 00:31:40.667 [2024-06-10 12:09:34.263729] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.667 [2024-06-10 12:09:34.263792] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.667 [2024-06-10 12:09:34.263804] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.667 [2024-06-10 12:09:34.263809] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.667 [2024-06-10 12:09:34.263814] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.667 [2024-06-10 12:09:34.263824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.667 qpair failed and we were unable to recover it. 00:31:40.667 [2024-06-10 12:09:34.273638] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.667 [2024-06-10 12:09:34.273737] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.667 [2024-06-10 12:09:34.273749] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.667 [2024-06-10 12:09:34.273755] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.667 [2024-06-10 12:09:34.273759] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.667 [2024-06-10 12:09:34.273770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.667 qpair failed and we were unable to recover it. 
00:31:40.667 [2024-06-10 12:09:34.283760] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.667 [2024-06-10 12:09:34.283822] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.667 [2024-06-10 12:09:34.283834] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.667 [2024-06-10 12:09:34.283839] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.667 [2024-06-10 12:09:34.283844] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.667 [2024-06-10 12:09:34.283854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.667 qpair failed and we were unable to recover it. 00:31:40.667 [2024-06-10 12:09:34.293796] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.667 [2024-06-10 12:09:34.293927] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.667 [2024-06-10 12:09:34.293939] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.667 [2024-06-10 12:09:34.293945] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.667 [2024-06-10 12:09:34.293949] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.667 [2024-06-10 12:09:34.293959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.667 qpair failed and we were unable to recover it. 00:31:40.667 [2024-06-10 12:09:34.303814] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.667 [2024-06-10 12:09:34.303895] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.667 [2024-06-10 12:09:34.303909] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.667 [2024-06-10 12:09:34.303915] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.667 [2024-06-10 12:09:34.303919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.667 [2024-06-10 12:09:34.303930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.667 qpair failed and we were unable to recover it. 
00:31:40.667 [2024-06-10 12:09:34.313732] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.667 [2024-06-10 12:09:34.313789] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.667 [2024-06-10 12:09:34.313802] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.667 [2024-06-10 12:09:34.313807] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.667 [2024-06-10 12:09:34.313811] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.667 [2024-06-10 12:09:34.313822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.667 qpair failed and we were unable to recover it. 00:31:40.667 [2024-06-10 12:09:34.323856] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.667 [2024-06-10 12:09:34.323919] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.667 [2024-06-10 12:09:34.323931] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.667 [2024-06-10 12:09:34.323936] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.667 [2024-06-10 12:09:34.323941] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.667 [2024-06-10 12:09:34.323951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.667 qpair failed and we were unable to recover it. 00:31:40.667 [2024-06-10 12:09:34.333825] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.667 [2024-06-10 12:09:34.333885] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.668 [2024-06-10 12:09:34.333897] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.668 [2024-06-10 12:09:34.333902] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.668 [2024-06-10 12:09:34.333906] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.668 [2024-06-10 12:09:34.333916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.668 qpair failed and we were unable to recover it. 
00:31:40.668 [2024-06-10 12:09:34.343934] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.668 [2024-06-10 12:09:34.344000] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.668 [2024-06-10 12:09:34.344013] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.668 [2024-06-10 12:09:34.344018] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.668 [2024-06-10 12:09:34.344023] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.668 [2024-06-10 12:09:34.344037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.668 qpair failed and we were unable to recover it. 00:31:40.668 [2024-06-10 12:09:34.353955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.668 [2024-06-10 12:09:34.354012] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.668 [2024-06-10 12:09:34.354024] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.668 [2024-06-10 12:09:34.354029] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.668 [2024-06-10 12:09:34.354034] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.668 [2024-06-10 12:09:34.354044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.668 qpair failed and we were unable to recover it. 00:31:40.668 [2024-06-10 12:09:34.363970] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.668 [2024-06-10 12:09:34.364067] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.668 [2024-06-10 12:09:34.364079] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.668 [2024-06-10 12:09:34.364084] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.668 [2024-06-10 12:09:34.364089] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.668 [2024-06-10 12:09:34.364099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.668 qpair failed and we were unable to recover it. 
00:31:40.668 [2024-06-10 12:09:34.373897] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.668 [2024-06-10 12:09:34.373991] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.668 [2024-06-10 12:09:34.374003] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.668 [2024-06-10 12:09:34.374008] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.668 [2024-06-10 12:09:34.374013] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.668 [2024-06-10 12:09:34.374024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.668 qpair failed and we were unable to recover it. 00:31:40.668 [2024-06-10 12:09:34.384107] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.668 [2024-06-10 12:09:34.384214] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.668 [2024-06-10 12:09:34.384226] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.668 [2024-06-10 12:09:34.384232] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.668 [2024-06-10 12:09:34.384237] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.668 [2024-06-10 12:09:34.384254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.668 qpair failed and we were unable to recover it. 00:31:40.668 [2024-06-10 12:09:34.394050] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.668 [2024-06-10 12:09:34.394110] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.668 [2024-06-10 12:09:34.394125] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.668 [2024-06-10 12:09:34.394130] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.668 [2024-06-10 12:09:34.394135] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.668 [2024-06-10 12:09:34.394145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.668 qpair failed and we were unable to recover it. 
00:31:40.668 [2024-06-10 12:09:34.404081] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.668 [2024-06-10 12:09:34.404179] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.668 [2024-06-10 12:09:34.404191] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.668 [2024-06-10 12:09:34.404196] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.668 [2024-06-10 12:09:34.404200] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.668 [2024-06-10 12:09:34.404211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.668 qpair failed and we were unable to recover it. 00:31:40.668 [2024-06-10 12:09:34.414129] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.668 [2024-06-10 12:09:34.414190] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.668 [2024-06-10 12:09:34.414202] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.668 [2024-06-10 12:09:34.414207] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.668 [2024-06-10 12:09:34.414211] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.668 [2024-06-10 12:09:34.414221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.668 qpair failed and we were unable to recover it. 00:31:40.668 [2024-06-10 12:09:34.424162] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.668 [2024-06-10 12:09:34.424232] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.668 [2024-06-10 12:09:34.424248] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.668 [2024-06-10 12:09:34.424253] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.668 [2024-06-10 12:09:34.424257] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.668 [2024-06-10 12:09:34.424268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.668 qpair failed and we were unable to recover it. 
00:31:40.668 [2024-06-10 12:09:34.434156] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.668 [2024-06-10 12:09:34.434211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.668 [2024-06-10 12:09:34.434223] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.668 [2024-06-10 12:09:34.434228] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.668 [2024-06-10 12:09:34.434235] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.668 [2024-06-10 12:09:34.434250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.668 qpair failed and we were unable to recover it. 00:31:40.930 [2024-06-10 12:09:34.444117] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.930 [2024-06-10 12:09:34.444175] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.930 [2024-06-10 12:09:34.444187] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.930 [2024-06-10 12:09:34.444192] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.930 [2024-06-10 12:09:34.444196] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.930 [2024-06-10 12:09:34.444207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.930 qpair failed and we were unable to recover it. 00:31:40.930 [2024-06-10 12:09:34.454246] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.930 [2024-06-10 12:09:34.454309] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.930 [2024-06-10 12:09:34.454322] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.930 [2024-06-10 12:09:34.454327] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.930 [2024-06-10 12:09:34.454331] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.930 [2024-06-10 12:09:34.454342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.930 qpair failed and we were unable to recover it. 
00:31:40.930 [2024-06-10 12:09:34.464302] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.930 [2024-06-10 12:09:34.464415] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.930 [2024-06-10 12:09:34.464427] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.930 [2024-06-10 12:09:34.464432] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.930 [2024-06-10 12:09:34.464437] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.930 [2024-06-10 12:09:34.464448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.930 qpair failed and we were unable to recover it. 00:31:40.930 [2024-06-10 12:09:34.474166] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.930 [2024-06-10 12:09:34.474224] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.930 [2024-06-10 12:09:34.474236] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.930 [2024-06-10 12:09:34.474246] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.930 [2024-06-10 12:09:34.474251] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.930 [2024-06-10 12:09:34.474261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.930 qpair failed and we were unable to recover it. 00:31:40.930 [2024-06-10 12:09:34.484232] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.930 [2024-06-10 12:09:34.484304] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.930 [2024-06-10 12:09:34.484316] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.930 [2024-06-10 12:09:34.484321] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.930 [2024-06-10 12:09:34.484325] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.930 [2024-06-10 12:09:34.484336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.930 qpair failed and we were unable to recover it. 
00:31:40.930 [2024-06-10 12:09:34.494348] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.930 [2024-06-10 12:09:34.494422] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.930 [2024-06-10 12:09:34.494434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.930 [2024-06-10 12:09:34.494439] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.930 [2024-06-10 12:09:34.494443] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.930 [2024-06-10 12:09:34.494454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.930 qpair failed and we were unable to recover it. 00:31:40.930 [2024-06-10 12:09:34.504369] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.930 [2024-06-10 12:09:34.504458] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.930 [2024-06-10 12:09:34.504469] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.930 [2024-06-10 12:09:34.504474] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.930 [2024-06-10 12:09:34.504480] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.930 [2024-06-10 12:09:34.504490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.930 qpair failed and we were unable to recover it. 00:31:40.930 [2024-06-10 12:09:34.514419] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.930 [2024-06-10 12:09:34.514478] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.930 [2024-06-10 12:09:34.514490] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.930 [2024-06-10 12:09:34.514495] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.930 [2024-06-10 12:09:34.514500] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.930 [2024-06-10 12:09:34.514510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.930 qpair failed and we were unable to recover it. 
00:31:40.930 [2024-06-10 12:09:34.524466] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.930 [2024-06-10 12:09:34.524529] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.930 [2024-06-10 12:09:34.524541] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.930 [2024-06-10 12:09:34.524546] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.930 [2024-06-10 12:09:34.524553] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.930 [2024-06-10 12:09:34.524565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.930 qpair failed and we were unable to recover it. 00:31:40.930 [2024-06-10 12:09:34.534474] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.931 [2024-06-10 12:09:34.534535] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.931 [2024-06-10 12:09:34.534547] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.931 [2024-06-10 12:09:34.534552] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.931 [2024-06-10 12:09:34.534557] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.931 [2024-06-10 12:09:34.534567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.931 qpair failed and we were unable to recover it. 00:31:40.931 [2024-06-10 12:09:34.544487] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.931 [2024-06-10 12:09:34.544548] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.931 [2024-06-10 12:09:34.544560] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.931 [2024-06-10 12:09:34.544565] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.931 [2024-06-10 12:09:34.544569] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.931 [2024-06-10 12:09:34.544580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.931 qpair failed and we were unable to recover it. 
00:31:40.931 [2024-06-10 12:09:34.554520] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.931 [2024-06-10 12:09:34.554581] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.931 [2024-06-10 12:09:34.554592] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.931 [2024-06-10 12:09:34.554598] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.931 [2024-06-10 12:09:34.554602] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.931 [2024-06-10 12:09:34.554613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.931 qpair failed and we were unable to recover it. 00:31:40.931 [2024-06-10 12:09:34.564428] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.931 [2024-06-10 12:09:34.564491] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.931 [2024-06-10 12:09:34.564503] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.931 [2024-06-10 12:09:34.564508] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.931 [2024-06-10 12:09:34.564512] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.931 [2024-06-10 12:09:34.564522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.931 qpair failed and we were unable to recover it. 00:31:40.931 [2024-06-10 12:09:34.574584] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.931 [2024-06-10 12:09:34.574643] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.931 [2024-06-10 12:09:34.574655] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.931 [2024-06-10 12:09:34.574660] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.931 [2024-06-10 12:09:34.574664] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.931 [2024-06-10 12:09:34.574675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.931 qpair failed and we were unable to recover it. 
00:31:40.931 [2024-06-10 12:09:34.584650] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.931 [2024-06-10 12:09:34.584715] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.931 [2024-06-10 12:09:34.584727] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.931 [2024-06-10 12:09:34.584732] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.931 [2024-06-10 12:09:34.584737] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.931 [2024-06-10 12:09:34.584747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.931 qpair failed and we were unable to recover it. 00:31:40.931 [2024-06-10 12:09:34.594729] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.931 [2024-06-10 12:09:34.594797] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.931 [2024-06-10 12:09:34.594808] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.931 [2024-06-10 12:09:34.594814] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.931 [2024-06-10 12:09:34.594818] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.931 [2024-06-10 12:09:34.594828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.931 qpair failed and we were unable to recover it. 00:31:40.931 [2024-06-10 12:09:34.604697] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.931 [2024-06-10 12:09:34.604756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.931 [2024-06-10 12:09:34.604767] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.931 [2024-06-10 12:09:34.604772] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.931 [2024-06-10 12:09:34.604777] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.931 [2024-06-10 12:09:34.604787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.931 qpair failed and we were unable to recover it. 
00:31:40.931 [2024-06-10 12:09:34.614737] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.931 [2024-06-10 12:09:34.614797] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.931 [2024-06-10 12:09:34.614809] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.931 [2024-06-10 12:09:34.614817] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.931 [2024-06-10 12:09:34.614822] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.931 [2024-06-10 12:09:34.614832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.931 qpair failed and we were unable to recover it. 00:31:40.931 [2024-06-10 12:09:34.624783] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.931 [2024-06-10 12:09:34.624855] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.931 [2024-06-10 12:09:34.624867] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.931 [2024-06-10 12:09:34.624872] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.931 [2024-06-10 12:09:34.624877] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.931 [2024-06-10 12:09:34.624887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.931 qpair failed and we were unable to recover it. 00:31:40.931 [2024-06-10 12:09:34.634780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.931 [2024-06-10 12:09:34.634853] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.931 [2024-06-10 12:09:34.634864] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.931 [2024-06-10 12:09:34.634869] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.931 [2024-06-10 12:09:34.634874] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.931 [2024-06-10 12:09:34.634884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.931 qpair failed and we were unable to recover it. 
00:31:40.931 [2024-06-10 12:09:34.644792] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.931 [2024-06-10 12:09:34.644890] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.931 [2024-06-10 12:09:34.644901] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.931 [2024-06-10 12:09:34.644906] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.931 [2024-06-10 12:09:34.644911] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.931 [2024-06-10 12:09:34.644921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.931 qpair failed and we were unable to recover it. 00:31:40.931 [2024-06-10 12:09:34.654690] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.931 [2024-06-10 12:09:34.654750] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.931 [2024-06-10 12:09:34.654762] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.931 [2024-06-10 12:09:34.654767] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.931 [2024-06-10 12:09:34.654772] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.931 [2024-06-10 12:09:34.654783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.931 qpair failed and we were unable to recover it. 00:31:40.931 [2024-06-10 12:09:34.664822] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.931 [2024-06-10 12:09:34.664893] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.931 [2024-06-10 12:09:34.664905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.931 [2024-06-10 12:09:34.664910] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.932 [2024-06-10 12:09:34.664915] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.932 [2024-06-10 12:09:34.664925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.932 qpair failed and we were unable to recover it. 
00:31:40.932 [2024-06-10 12:09:34.674833] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.932 [2024-06-10 12:09:34.674900] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.932 [2024-06-10 12:09:34.674919] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.932 [2024-06-10 12:09:34.674925] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.932 [2024-06-10 12:09:34.674931] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.932 [2024-06-10 12:09:34.674944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.932 qpair failed and we were unable to recover it. 00:31:40.932 [2024-06-10 12:09:34.684805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.932 [2024-06-10 12:09:34.684892] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.932 [2024-06-10 12:09:34.684907] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.932 [2024-06-10 12:09:34.684913] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.932 [2024-06-10 12:09:34.684917] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.932 [2024-06-10 12:09:34.684929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.932 qpair failed and we were unable to recover it. 00:31:40.932 [2024-06-10 12:09:34.694915] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.932 [2024-06-10 12:09:34.694975] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.932 [2024-06-10 12:09:34.694987] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.932 [2024-06-10 12:09:34.694992] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.932 [2024-06-10 12:09:34.694997] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:40.932 [2024-06-10 12:09:34.695008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:40.932 qpair failed and we were unable to recover it. 
00:31:41.193 [2024-06-10 12:09:34.704815] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.193 [2024-06-10 12:09:34.704884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.193 [2024-06-10 12:09:34.704900] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.193 [2024-06-10 12:09:34.704906] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.193 [2024-06-10 12:09:34.704911] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.193 [2024-06-10 12:09:34.704924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.193 qpair failed and we were unable to recover it. 00:31:41.193 [2024-06-10 12:09:34.714969] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.193 [2024-06-10 12:09:34.715024] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.193 [2024-06-10 12:09:34.715037] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.193 [2024-06-10 12:09:34.715042] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.193 [2024-06-10 12:09:34.715046] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.193 [2024-06-10 12:09:34.715057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.193 qpair failed and we were unable to recover it. 00:31:41.193 [2024-06-10 12:09:34.724990] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.193 [2024-06-10 12:09:34.725051] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.193 [2024-06-10 12:09:34.725063] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.193 [2024-06-10 12:09:34.725068] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.193 [2024-06-10 12:09:34.725072] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.193 [2024-06-10 12:09:34.725083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.193 qpair failed and we were unable to recover it. 
00:31:41.193 [2024-06-10 12:09:34.734985] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.193 [2024-06-10 12:09:34.735049] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.193 [2024-06-10 12:09:34.735061] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.193 [2024-06-10 12:09:34.735066] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.193 [2024-06-10 12:09:34.735070] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.193 [2024-06-10 12:09:34.735080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.193 qpair failed and we were unable to recover it. 00:31:41.193 [2024-06-10 12:09:34.745104] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.193 [2024-06-10 12:09:34.745206] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.193 [2024-06-10 12:09:34.745218] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.193 [2024-06-10 12:09:34.745223] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.193 [2024-06-10 12:09:34.745228] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.193 [2024-06-10 12:09:34.745240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.193 qpair failed and we were unable to recover it. 00:31:41.193 [2024-06-10 12:09:34.755090] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.193 [2024-06-10 12:09:34.755150] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.193 [2024-06-10 12:09:34.755162] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.193 [2024-06-10 12:09:34.755166] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.193 [2024-06-10 12:09:34.755171] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.193 [2024-06-10 12:09:34.755181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.193 qpair failed and we were unable to recover it. 
00:31:41.193 [2024-06-10 12:09:34.765107] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.193 [2024-06-10 12:09:34.765164] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.194 [2024-06-10 12:09:34.765176] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.194 [2024-06-10 12:09:34.765180] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.194 [2024-06-10 12:09:34.765185] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.194 [2024-06-10 12:09:34.765195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.194 qpair failed and we were unable to recover it. 00:31:41.194 [2024-06-10 12:09:34.775169] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.194 [2024-06-10 12:09:34.775272] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.194 [2024-06-10 12:09:34.775284] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.194 [2024-06-10 12:09:34.775289] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.194 [2024-06-10 12:09:34.775293] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.194 [2024-06-10 12:09:34.775304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.194 qpair failed and we were unable to recover it. 00:31:41.194 [2024-06-10 12:09:34.785191] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.194 [2024-06-10 12:09:34.785257] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.194 [2024-06-10 12:09:34.785269] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.194 [2024-06-10 12:09:34.785274] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.194 [2024-06-10 12:09:34.785278] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.194 [2024-06-10 12:09:34.785289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.194 qpair failed and we were unable to recover it. 
00:31:41.194 [2024-06-10 12:09:34.795209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.194 [2024-06-10 12:09:34.795303] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.194 [2024-06-10 12:09:34.795318] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.194 [2024-06-10 12:09:34.795323] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.194 [2024-06-10 12:09:34.795328] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.194 [2024-06-10 12:09:34.795338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.194 qpair failed and we were unable to recover it. 00:31:41.194 [2024-06-10 12:09:34.805264] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.194 [2024-06-10 12:09:34.805364] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.194 [2024-06-10 12:09:34.805375] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.194 [2024-06-10 12:09:34.805381] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.194 [2024-06-10 12:09:34.805386] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.194 [2024-06-10 12:09:34.805397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.194 qpair failed and we were unable to recover it. 00:31:41.194 [2024-06-10 12:09:34.815271] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.194 [2024-06-10 12:09:34.815359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.194 [2024-06-10 12:09:34.815371] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.194 [2024-06-10 12:09:34.815376] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.194 [2024-06-10 12:09:34.815380] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.194 [2024-06-10 12:09:34.815390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.194 qpair failed and we were unable to recover it. 
00:31:41.194 [2024-06-10 12:09:34.825306] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.194 [2024-06-10 12:09:34.825372] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.194 [2024-06-10 12:09:34.825384] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.194 [2024-06-10 12:09:34.825389] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.194 [2024-06-10 12:09:34.825393] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.194 [2024-06-10 12:09:34.825404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.194 qpair failed and we were unable to recover it. 00:31:41.194 [2024-06-10 12:09:34.835329] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.194 [2024-06-10 12:09:34.835389] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.194 [2024-06-10 12:09:34.835400] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.194 [2024-06-10 12:09:34.835405] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.194 [2024-06-10 12:09:34.835410] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.194 [2024-06-10 12:09:34.835423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.194 qpair failed and we were unable to recover it. 00:31:41.194 [2024-06-10 12:09:34.845343] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.194 [2024-06-10 12:09:34.845432] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.194 [2024-06-10 12:09:34.845443] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.194 [2024-06-10 12:09:34.845448] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.194 [2024-06-10 12:09:34.845453] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.194 [2024-06-10 12:09:34.845464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.194 qpair failed and we were unable to recover it. 
00:31:41.194 [2024-06-10 12:09:34.855374] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.194 [2024-06-10 12:09:34.855437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.194 [2024-06-10 12:09:34.855448] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.194 [2024-06-10 12:09:34.855453] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.194 [2024-06-10 12:09:34.855458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.194 [2024-06-10 12:09:34.855468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.194 qpair failed and we were unable to recover it. 00:31:41.194 [2024-06-10 12:09:34.865391] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.194 [2024-06-10 12:09:34.865460] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.194 [2024-06-10 12:09:34.865472] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.194 [2024-06-10 12:09:34.865477] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.194 [2024-06-10 12:09:34.865481] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.194 [2024-06-10 12:09:34.865491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.194 qpair failed and we were unable to recover it. 00:31:41.194 [2024-06-10 12:09:34.875441] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.194 [2024-06-10 12:09:34.875529] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.194 [2024-06-10 12:09:34.875541] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.194 [2024-06-10 12:09:34.875546] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.194 [2024-06-10 12:09:34.875551] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.194 [2024-06-10 12:09:34.875561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.194 qpair failed and we were unable to recover it. 
00:31:41.194 [2024-06-10 12:09:34.885378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.194 [2024-06-10 12:09:34.885449] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.194 [2024-06-10 12:09:34.885464] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.194 [2024-06-10 12:09:34.885469] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.194 [2024-06-10 12:09:34.885473] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.194 [2024-06-10 12:09:34.885483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.194 qpair failed and we were unable to recover it. 00:31:41.194 [2024-06-10 12:09:34.895519] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.194 [2024-06-10 12:09:34.895591] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.194 [2024-06-10 12:09:34.895602] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.194 [2024-06-10 12:09:34.895608] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.194 [2024-06-10 12:09:34.895612] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.194 [2024-06-10 12:09:34.895622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.195 qpair failed and we were unable to recover it. 00:31:41.195 [2024-06-10 12:09:34.905415] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.195 [2024-06-10 12:09:34.905478] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.195 [2024-06-10 12:09:34.905490] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.195 [2024-06-10 12:09:34.905495] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.195 [2024-06-10 12:09:34.905499] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.195 [2024-06-10 12:09:34.905510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.195 qpair failed and we were unable to recover it. 
00:31:41.195 [2024-06-10 12:09:34.915554] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.195 [2024-06-10 12:09:34.915617] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.195 [2024-06-10 12:09:34.915628] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.195 [2024-06-10 12:09:34.915633] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.195 [2024-06-10 12:09:34.915638] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.195 [2024-06-10 12:09:34.915648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.195 qpair failed and we were unable to recover it. 00:31:41.195 [2024-06-10 12:09:34.925577] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.195 [2024-06-10 12:09:34.925637] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.195 [2024-06-10 12:09:34.925649] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.195 [2024-06-10 12:09:34.925653] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.195 [2024-06-10 12:09:34.925661] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.195 [2024-06-10 12:09:34.925671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.195 qpair failed and we were unable to recover it. 00:31:41.195 [2024-06-10 12:09:34.935504] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.195 [2024-06-10 12:09:34.935575] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.195 [2024-06-10 12:09:34.935587] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.195 [2024-06-10 12:09:34.935593] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.195 [2024-06-10 12:09:34.935597] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.195 [2024-06-10 12:09:34.935608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.195 qpair failed and we were unable to recover it. 
00:31:41.195 [2024-06-10 12:09:34.945652] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.195 [2024-06-10 12:09:34.945742] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.195 [2024-06-10 12:09:34.945754] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.195 [2024-06-10 12:09:34.945759] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.195 [2024-06-10 12:09:34.945764] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.195 [2024-06-10 12:09:34.945774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.195 qpair failed and we were unable to recover it. 00:31:41.195 [2024-06-10 12:09:34.955678] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.195 [2024-06-10 12:09:34.955743] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.195 [2024-06-10 12:09:34.955755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.195 [2024-06-10 12:09:34.955761] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.195 [2024-06-10 12:09:34.955765] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.195 [2024-06-10 12:09:34.955775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.195 qpair failed and we were unable to recover it. 00:31:41.457 [2024-06-10 12:09:34.965705] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.457 [2024-06-10 12:09:34.965759] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.457 [2024-06-10 12:09:34.965771] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.457 [2024-06-10 12:09:34.965776] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.457 [2024-06-10 12:09:34.965781] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.457 [2024-06-10 12:09:34.965791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.457 qpair failed and we were unable to recover it. 
00:31:41.457 [2024-06-10 12:09:34.975736] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.457 [2024-06-10 12:09:34.975799] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.457 [2024-06-10 12:09:34.975811] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.457 [2024-06-10 12:09:34.975816] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.457 [2024-06-10 12:09:34.975821] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.457 [2024-06-10 12:09:34.975831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.457 qpair failed and we were unable to recover it. 00:31:41.457 [2024-06-10 12:09:34.985774] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.457 [2024-06-10 12:09:34.985835] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.457 [2024-06-10 12:09:34.985847] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.457 [2024-06-10 12:09:34.985852] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.457 [2024-06-10 12:09:34.985857] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.457 [2024-06-10 12:09:34.985867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.457 qpair failed and we were unable to recover it. 00:31:41.457 [2024-06-10 12:09:34.995797] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.457 [2024-06-10 12:09:34.995850] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.457 [2024-06-10 12:09:34.995861] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.457 [2024-06-10 12:09:34.995867] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.457 [2024-06-10 12:09:34.995871] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.457 [2024-06-10 12:09:34.995882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.457 qpair failed and we were unable to recover it. 
00:31:41.457 [2024-06-10 12:09:35.005706] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.458 [2024-06-10 12:09:35.005772] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.458 [2024-06-10 12:09:35.005784] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.458 [2024-06-10 12:09:35.005789] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.458 [2024-06-10 12:09:35.005794] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.458 [2024-06-10 12:09:35.005804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.458 qpair failed and we were unable to recover it. 00:31:41.458 [2024-06-10 12:09:35.015738] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.458 [2024-06-10 12:09:35.015803] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.458 [2024-06-10 12:09:35.015815] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.458 [2024-06-10 12:09:35.015820] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.458 [2024-06-10 12:09:35.015827] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.458 [2024-06-10 12:09:35.015838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.458 qpair failed and we were unable to recover it. 00:31:41.458 [2024-06-10 12:09:35.025889] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.458 [2024-06-10 12:09:35.025953] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.458 [2024-06-10 12:09:35.025964] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.458 [2024-06-10 12:09:35.025969] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.458 [2024-06-10 12:09:35.025974] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.458 [2024-06-10 12:09:35.025984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.458 qpair failed and we were unable to recover it. 
00:31:41.458 [2024-06-10 12:09:35.035779] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.458 [2024-06-10 12:09:35.035840] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.458 [2024-06-10 12:09:35.035852] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.458 [2024-06-10 12:09:35.035857] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.458 [2024-06-10 12:09:35.035861] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.458 [2024-06-10 12:09:35.035871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.458 qpair failed and we were unable to recover it. 00:31:41.458 [2024-06-10 12:09:35.046019] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.458 [2024-06-10 12:09:35.046119] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.458 [2024-06-10 12:09:35.046130] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.458 [2024-06-10 12:09:35.046135] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.458 [2024-06-10 12:09:35.046140] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.458 [2024-06-10 12:09:35.046150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.458 qpair failed and we were unable to recover it. 00:31:41.458 [2024-06-10 12:09:35.056073] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.458 [2024-06-10 12:09:35.056132] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.458 [2024-06-10 12:09:35.056144] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.458 [2024-06-10 12:09:35.056149] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.458 [2024-06-10 12:09:35.056153] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.458 [2024-06-10 12:09:35.056163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.458 qpair failed and we were unable to recover it. 
00:31:41.458 [2024-06-10 12:09:35.066009] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.458 [2024-06-10 12:09:35.066070] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.458 [2024-06-10 12:09:35.066082] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.458 [2024-06-10 12:09:35.066087] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.458 [2024-06-10 12:09:35.066091] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.458 [2024-06-10 12:09:35.066101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.458 qpair failed and we were unable to recover it. 00:31:41.458 [2024-06-10 12:09:35.076049] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.458 [2024-06-10 12:09:35.076139] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.458 [2024-06-10 12:09:35.076150] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.458 [2024-06-10 12:09:35.076155] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.458 [2024-06-10 12:09:35.076160] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.458 [2024-06-10 12:09:35.076170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.458 qpair failed and we were unable to recover it. 00:31:41.458 [2024-06-10 12:09:35.086050] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.458 [2024-06-10 12:09:35.086108] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.458 [2024-06-10 12:09:35.086120] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.458 [2024-06-10 12:09:35.086125] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.458 [2024-06-10 12:09:35.086129] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.458 [2024-06-10 12:09:35.086139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.458 qpair failed and we were unable to recover it. 
00:31:41.458 [2024-06-10 12:09:35.096085] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.458 [2024-06-10 12:09:35.096145] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.458 [2024-06-10 12:09:35.096156] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.458 [2024-06-10 12:09:35.096162] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.458 [2024-06-10 12:09:35.096166] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.458 [2024-06-10 12:09:35.096176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.458 qpair failed and we were unable to recover it. 00:31:41.458 [2024-06-10 12:09:35.106099] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.458 [2024-06-10 12:09:35.106161] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.458 [2024-06-10 12:09:35.106173] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.458 [2024-06-10 12:09:35.106181] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.458 [2024-06-10 12:09:35.106185] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.458 [2024-06-10 12:09:35.106195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.458 qpair failed and we were unable to recover it. 00:31:41.458 [2024-06-10 12:09:35.116123] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.458 [2024-06-10 12:09:35.116185] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.458 [2024-06-10 12:09:35.116196] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.458 [2024-06-10 12:09:35.116201] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.458 [2024-06-10 12:09:35.116206] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.458 [2024-06-10 12:09:35.116216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.458 qpair failed and we were unable to recover it. 
00:31:41.458 [2024-06-10 12:09:35.126170] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.458 [2024-06-10 12:09:35.126233] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.458 [2024-06-10 12:09:35.126248] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.458 [2024-06-10 12:09:35.126253] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.458 [2024-06-10 12:09:35.126257] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.458 [2024-06-10 12:09:35.126268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.458 qpair failed and we were unable to recover it. 00:31:41.458 [2024-06-10 12:09:35.136094] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.458 [2024-06-10 12:09:35.136154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.458 [2024-06-10 12:09:35.136169] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.458 [2024-06-10 12:09:35.136175] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.458 [2024-06-10 12:09:35.136179] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.458 [2024-06-10 12:09:35.136196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.458 qpair failed and we were unable to recover it. 00:31:41.458 [2024-06-10 12:09:35.146055] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.458 [2024-06-10 12:09:35.146131] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.458 [2024-06-10 12:09:35.146144] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.458 [2024-06-10 12:09:35.146149] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.458 [2024-06-10 12:09:35.146153] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.458 [2024-06-10 12:09:35.146164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.458 qpair failed and we were unable to recover it. 
00:31:41.458 [2024-06-10 12:09:35.156222] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.458 [2024-06-10 12:09:35.156281] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.458 [2024-06-10 12:09:35.156293] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.458 [2024-06-10 12:09:35.156298] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.458 [2024-06-10 12:09:35.156303] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.458 [2024-06-10 12:09:35.156313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.458 qpair failed and we were unable to recover it. 00:31:41.458 [2024-06-10 12:09:35.166273] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.458 [2024-06-10 12:09:35.166329] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.458 [2024-06-10 12:09:35.166341] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.458 [2024-06-10 12:09:35.166346] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.458 [2024-06-10 12:09:35.166351] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.458 [2024-06-10 12:09:35.166362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.458 qpair failed and we were unable to recover it. 00:31:41.458 [2024-06-10 12:09:35.176273] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.458 [2024-06-10 12:09:35.176325] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.458 [2024-06-10 12:09:35.176337] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.458 [2024-06-10 12:09:35.176342] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.458 [2024-06-10 12:09:35.176347] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.458 [2024-06-10 12:09:35.176357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.458 qpair failed and we were unable to recover it. 
00:31:41.458 [2024-06-10 12:09:35.186244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.458 [2024-06-10 12:09:35.186299] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.458 [2024-06-10 12:09:35.186310] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.458 [2024-06-10 12:09:35.186315] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.458 [2024-06-10 12:09:35.186320] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.458 [2024-06-10 12:09:35.186330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.458 qpair failed and we were unable to recover it. 00:31:41.458 [2024-06-10 12:09:35.196406] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.458 [2024-06-10 12:09:35.196463] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.458 [2024-06-10 12:09:35.196474] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.458 [2024-06-10 12:09:35.196482] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.458 [2024-06-10 12:09:35.196486] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.458 [2024-06-10 12:09:35.196497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.458 qpair failed and we were unable to recover it. 00:31:41.458 [2024-06-10 12:09:35.206417] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.458 [2024-06-10 12:09:35.206494] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.458 [2024-06-10 12:09:35.206505] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.458 [2024-06-10 12:09:35.206510] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.458 [2024-06-10 12:09:35.206514] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.458 [2024-06-10 12:09:35.206525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.458 qpair failed and we were unable to recover it. 
00:31:41.458 [2024-06-10 12:09:35.216449] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.458 [2024-06-10 12:09:35.216571] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.458 [2024-06-10 12:09:35.216583] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.458 [2024-06-10 12:09:35.216588] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.458 [2024-06-10 12:09:35.216592] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.458 [2024-06-10 12:09:35.216602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.458 qpair failed and we were unable to recover it. 00:31:41.458 [2024-06-10 12:09:35.226268] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.458 [2024-06-10 12:09:35.226324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.458 [2024-06-10 12:09:35.226336] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.458 [2024-06-10 12:09:35.226341] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.458 [2024-06-10 12:09:35.226346] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.459 [2024-06-10 12:09:35.226357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.459 qpair failed and we were unable to recover it. 00:31:41.719 [2024-06-10 12:09:35.236459] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.719 [2024-06-10 12:09:35.236513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.719 [2024-06-10 12:09:35.236525] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.720 [2024-06-10 12:09:35.236530] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.720 [2024-06-10 12:09:35.236535] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.720 [2024-06-10 12:09:35.236545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.720 qpair failed and we were unable to recover it. 
00:31:41.720 [2024-06-10 12:09:35.246355] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.720 [2024-06-10 12:09:35.246415] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.720 [2024-06-10 12:09:35.246427] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.720 [2024-06-10 12:09:35.246432] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.720 [2024-06-10 12:09:35.246437] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.720 [2024-06-10 12:09:35.246447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.720 qpair failed and we were unable to recover it. 00:31:41.720 [2024-06-10 12:09:35.256527] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.720 [2024-06-10 12:09:35.256623] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.720 [2024-06-10 12:09:35.256634] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.720 [2024-06-10 12:09:35.256639] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.720 [2024-06-10 12:09:35.256644] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.720 [2024-06-10 12:09:35.256654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.720 qpair failed and we were unable to recover it. 00:31:41.720 [2024-06-10 12:09:35.266548] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.720 [2024-06-10 12:09:35.266604] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.720 [2024-06-10 12:09:35.266615] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.720 [2024-06-10 12:09:35.266620] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.720 [2024-06-10 12:09:35.266625] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.720 [2024-06-10 12:09:35.266635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.720 qpair failed and we were unable to recover it. 
00:31:41.720 [2024-06-10 12:09:35.276564] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.720 [2024-06-10 12:09:35.276621] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.720 [2024-06-10 12:09:35.276632] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.720 [2024-06-10 12:09:35.276637] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.720 [2024-06-10 12:09:35.276642] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.720 [2024-06-10 12:09:35.276652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.720 qpair failed and we were unable to recover it. 00:31:41.720 [2024-06-10 12:09:35.286620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.720 [2024-06-10 12:09:35.286703] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.720 [2024-06-10 12:09:35.286722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.720 [2024-06-10 12:09:35.286728] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.720 [2024-06-10 12:09:35.286732] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.720 [2024-06-10 12:09:35.286742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.720 qpair failed and we were unable to recover it. 00:31:41.720 [2024-06-10 12:09:35.296616] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.720 [2024-06-10 12:09:35.296668] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.720 [2024-06-10 12:09:35.296679] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.720 [2024-06-10 12:09:35.296685] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.720 [2024-06-10 12:09:35.296689] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.720 [2024-06-10 12:09:35.296699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.720 qpair failed and we were unable to recover it. 
00:31:41.720 [2024-06-10 12:09:35.306630] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.720 [2024-06-10 12:09:35.306691] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.720 [2024-06-10 12:09:35.306702] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.720 [2024-06-10 12:09:35.306707] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.720 [2024-06-10 12:09:35.306712] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.720 [2024-06-10 12:09:35.306722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.720 qpair failed and we were unable to recover it. 00:31:41.720 [2024-06-10 12:09:35.316654] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.720 [2024-06-10 12:09:35.316705] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.720 [2024-06-10 12:09:35.316717] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.720 [2024-06-10 12:09:35.316722] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.720 [2024-06-10 12:09:35.316726] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.720 [2024-06-10 12:09:35.316736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.720 qpair failed and we were unable to recover it. 00:31:41.720 [2024-06-10 12:09:35.326730] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.720 [2024-06-10 12:09:35.326805] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.720 [2024-06-10 12:09:35.326816] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.720 [2024-06-10 12:09:35.326821] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.720 [2024-06-10 12:09:35.326825] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.720 [2024-06-10 12:09:35.326838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.720 qpair failed and we were unable to recover it. 
00:31:41.720 [2024-06-10 12:09:35.336743] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.720 [2024-06-10 12:09:35.336829] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.720 [2024-06-10 12:09:35.336840] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.720 [2024-06-10 12:09:35.336846] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.720 [2024-06-10 12:09:35.336850] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.720 [2024-06-10 12:09:35.336860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.720 qpair failed and we were unable to recover it. 00:31:41.720 [2024-06-10 12:09:35.346848] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.720 [2024-06-10 12:09:35.346903] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.720 [2024-06-10 12:09:35.346915] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.720 [2024-06-10 12:09:35.346920] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.720 [2024-06-10 12:09:35.346924] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.720 [2024-06-10 12:09:35.346935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.720 qpair failed and we were unable to recover it. 00:31:41.720 [2024-06-10 12:09:35.356636] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.720 [2024-06-10 12:09:35.356688] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.720 [2024-06-10 12:09:35.356701] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.720 [2024-06-10 12:09:35.356706] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.720 [2024-06-10 12:09:35.356711] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.720 [2024-06-10 12:09:35.356721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.720 qpair failed and we were unable to recover it. 
00:31:41.720 [2024-06-10 12:09:35.366824] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.720 [2024-06-10 12:09:35.366882] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.720 [2024-06-10 12:09:35.366894] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.720 [2024-06-10 12:09:35.366899] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.720 [2024-06-10 12:09:35.366903] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.721 [2024-06-10 12:09:35.366914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.721 qpair failed and we were unable to recover it. 00:31:41.721 [2024-06-10 12:09:35.376820] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.721 [2024-06-10 12:09:35.376883] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.721 [2024-06-10 12:09:35.376898] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.721 [2024-06-10 12:09:35.376903] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.721 [2024-06-10 12:09:35.376907] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.721 [2024-06-10 12:09:35.376918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.721 qpair failed and we were unable to recover it. 00:31:41.721 [2024-06-10 12:09:35.386844] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.721 [2024-06-10 12:09:35.386904] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.721 [2024-06-10 12:09:35.386923] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.721 [2024-06-10 12:09:35.386929] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.721 [2024-06-10 12:09:35.386933] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.721 [2024-06-10 12:09:35.386947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.721 qpair failed and we were unable to recover it. 
00:31:41.721 [2024-06-10 12:09:35.396876] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.721 [2024-06-10 12:09:35.396929] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.721 [2024-06-10 12:09:35.396943] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.721 [2024-06-10 12:09:35.396948] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.721 [2024-06-10 12:09:35.396953] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.721 [2024-06-10 12:09:35.396964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.721 qpair failed and we were unable to recover it. 00:31:41.721 [2024-06-10 12:09:35.406814] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.721 [2024-06-10 12:09:35.406876] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.721 [2024-06-10 12:09:35.406888] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.721 [2024-06-10 12:09:35.406893] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.721 [2024-06-10 12:09:35.406897] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.721 [2024-06-10 12:09:35.406908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.721 qpair failed and we were unable to recover it. 00:31:41.721 [2024-06-10 12:09:35.416934] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.721 [2024-06-10 12:09:35.416993] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.721 [2024-06-10 12:09:35.417011] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.721 [2024-06-10 12:09:35.417018] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.721 [2024-06-10 12:09:35.417026] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.721 [2024-06-10 12:09:35.417039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.721 qpair failed and we were unable to recover it. 
00:31:41.721 [2024-06-10 12:09:35.426853] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.721 [2024-06-10 12:09:35.426915] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.721 [2024-06-10 12:09:35.426934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.721 [2024-06-10 12:09:35.426940] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.721 [2024-06-10 12:09:35.426944] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.721 [2024-06-10 12:09:35.426957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.721 qpair failed and we were unable to recover it. 00:31:41.721 [2024-06-10 12:09:35.436857] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.721 [2024-06-10 12:09:35.436905] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.721 [2024-06-10 12:09:35.436918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.721 [2024-06-10 12:09:35.436923] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.721 [2024-06-10 12:09:35.436928] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.721 [2024-06-10 12:09:35.436939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.721 qpair failed and we were unable to recover it. 00:31:41.721 [2024-06-10 12:09:35.447091] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.721 [2024-06-10 12:09:35.447154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.721 [2024-06-10 12:09:35.447173] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.721 [2024-06-10 12:09:35.447178] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.721 [2024-06-10 12:09:35.447183] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.721 [2024-06-10 12:09:35.447197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.721 qpair failed and we were unable to recover it. 
00:31:41.721 [2024-06-10 12:09:35.456915] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.721 [2024-06-10 12:09:35.456973] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.721 [2024-06-10 12:09:35.456987] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.721 [2024-06-10 12:09:35.456992] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.721 [2024-06-10 12:09:35.456997] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.721 [2024-06-10 12:09:35.457008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.721 qpair failed and we were unable to recover it. 00:31:41.721 [2024-06-10 12:09:35.467101] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.721 [2024-06-10 12:09:35.467165] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.721 [2024-06-10 12:09:35.467177] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.721 [2024-06-10 12:09:35.467182] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.721 [2024-06-10 12:09:35.467187] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.721 [2024-06-10 12:09:35.467198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.721 qpair failed and we were unable to recover it. 00:31:41.721 [2024-06-10 12:09:35.476970] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.721 [2024-06-10 12:09:35.477046] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.721 [2024-06-10 12:09:35.477058] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.721 [2024-06-10 12:09:35.477063] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.721 [2024-06-10 12:09:35.477069] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.721 [2024-06-10 12:09:35.477080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.721 qpair failed and we were unable to recover it. 
00:31:41.721 [2024-06-10 12:09:35.487161] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.721 [2024-06-10 12:09:35.487221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.721 [2024-06-10 12:09:35.487233] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.721 [2024-06-10 12:09:35.487238] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.721 [2024-06-10 12:09:35.487247] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.721 [2024-06-10 12:09:35.487259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.721 qpair failed and we were unable to recover it. 00:31:41.983 [2024-06-10 12:09:35.497161] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.983 [2024-06-10 12:09:35.497216] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.983 [2024-06-10 12:09:35.497228] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.983 [2024-06-10 12:09:35.497233] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.983 [2024-06-10 12:09:35.497237] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.983 [2024-06-10 12:09:35.497251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.983 qpair failed and we were unable to recover it. 00:31:41.983 [2024-06-10 12:09:35.507196] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.983 [2024-06-10 12:09:35.507251] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.983 [2024-06-10 12:09:35.507263] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.983 [2024-06-10 12:09:35.507268] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.983 [2024-06-10 12:09:35.507277] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.983 [2024-06-10 12:09:35.507288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.983 qpair failed and we were unable to recover it. 
00:31:41.983 [2024-06-10 12:09:35.517082] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.984 [2024-06-10 12:09:35.517134] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.984 [2024-06-10 12:09:35.517146] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.984 [2024-06-10 12:09:35.517151] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.984 [2024-06-10 12:09:35.517155] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.984 [2024-06-10 12:09:35.517166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.984 qpair failed and we were unable to recover it. 00:31:41.984 [2024-06-10 12:09:35.527282] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.984 [2024-06-10 12:09:35.527352] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.984 [2024-06-10 12:09:35.527365] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.984 [2024-06-10 12:09:35.527370] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.984 [2024-06-10 12:09:35.527374] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.984 [2024-06-10 12:09:35.527385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.984 qpair failed and we were unable to recover it. 00:31:41.984 [2024-06-10 12:09:35.537265] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.984 [2024-06-10 12:09:35.537354] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.984 [2024-06-10 12:09:35.537365] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.984 [2024-06-10 12:09:35.537371] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.984 [2024-06-10 12:09:35.537375] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.984 [2024-06-10 12:09:35.537386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.984 qpair failed and we were unable to recover it. 
00:31:41.984 [2024-06-10 12:09:35.547160] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.984 [2024-06-10 12:09:35.547210] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.984 [2024-06-10 12:09:35.547222] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.984 [2024-06-10 12:09:35.547228] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.984 [2024-06-10 12:09:35.547232] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.984 [2024-06-10 12:09:35.547249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.984 qpair failed and we were unable to recover it. 00:31:41.984 [2024-06-10 12:09:35.557306] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.984 [2024-06-10 12:09:35.557362] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.984 [2024-06-10 12:09:35.557374] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.984 [2024-06-10 12:09:35.557379] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.984 [2024-06-10 12:09:35.557384] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.984 [2024-06-10 12:09:35.557394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.984 qpair failed and we were unable to recover it. 00:31:41.984 [2024-06-10 12:09:35.567366] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.984 [2024-06-10 12:09:35.567419] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.984 [2024-06-10 12:09:35.567431] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.984 [2024-06-10 12:09:35.567436] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.984 [2024-06-10 12:09:35.567441] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.984 [2024-06-10 12:09:35.567451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.984 qpair failed and we were unable to recover it. 
00:31:41.984 [2024-06-10 12:09:35.577357] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.984 [2024-06-10 12:09:35.577409] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.984 [2024-06-10 12:09:35.577421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.984 [2024-06-10 12:09:35.577426] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.984 [2024-06-10 12:09:35.577430] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.984 [2024-06-10 12:09:35.577441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.984 qpair failed and we were unable to recover it. 00:31:41.984 [2024-06-10 12:09:35.587267] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.984 [2024-06-10 12:09:35.587322] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.984 [2024-06-10 12:09:35.587334] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.984 [2024-06-10 12:09:35.587339] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.984 [2024-06-10 12:09:35.587344] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.984 [2024-06-10 12:09:35.587354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.984 qpair failed and we were unable to recover it. 00:31:41.984 [2024-06-10 12:09:35.597289] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.984 [2024-06-10 12:09:35.597349] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.984 [2024-06-10 12:09:35.597361] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.984 [2024-06-10 12:09:35.597369] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.984 [2024-06-10 12:09:35.597374] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.984 [2024-06-10 12:09:35.597385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.984 qpair failed and we were unable to recover it. 
00:31:41.984 [2024-06-10 12:09:35.607476] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.984 [2024-06-10 12:09:35.607533] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.984 [2024-06-10 12:09:35.607544] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.984 [2024-06-10 12:09:35.607550] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.984 [2024-06-10 12:09:35.607554] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.984 [2024-06-10 12:09:35.607565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.984 qpair failed and we were unable to recover it. 00:31:41.984 [2024-06-10 12:09:35.617352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.984 [2024-06-10 12:09:35.617405] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.984 [2024-06-10 12:09:35.617417] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.984 [2024-06-10 12:09:35.617422] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.984 [2024-06-10 12:09:35.617427] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.984 [2024-06-10 12:09:35.617437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.984 qpair failed and we were unable to recover it. 00:31:41.984 [2024-06-10 12:09:35.627497] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.984 [2024-06-10 12:09:35.627581] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.984 [2024-06-10 12:09:35.627593] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.984 [2024-06-10 12:09:35.627598] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.984 [2024-06-10 12:09:35.627604] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.984 [2024-06-10 12:09:35.627615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.984 qpair failed and we were unable to recover it. 
00:31:41.984 [2024-06-10 12:09:35.637518] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.984 [2024-06-10 12:09:35.637576] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.984 [2024-06-10 12:09:35.637589] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.984 [2024-06-10 12:09:35.637594] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.984 [2024-06-10 12:09:35.637598] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.984 [2024-06-10 12:09:35.637609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.984 qpair failed and we were unable to recover it. 00:31:41.984 [2024-06-10 12:09:35.647590] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.984 [2024-06-10 12:09:35.647645] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.984 [2024-06-10 12:09:35.647657] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.984 [2024-06-10 12:09:35.647662] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.985 [2024-06-10 12:09:35.647667] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.985 [2024-06-10 12:09:35.647677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.985 qpair failed and we were unable to recover it. 00:31:41.985 [2024-06-10 12:09:35.657634] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.985 [2024-06-10 12:09:35.657686] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.985 [2024-06-10 12:09:35.657698] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.985 [2024-06-10 12:09:35.657703] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.985 [2024-06-10 12:09:35.657708] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.985 [2024-06-10 12:09:35.657718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.985 qpair failed and we were unable to recover it. 
00:31:41.985 [2024-06-10 12:09:35.667614] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.985 [2024-06-10 12:09:35.667668] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.985 [2024-06-10 12:09:35.667681] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.985 [2024-06-10 12:09:35.667686] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.985 [2024-06-10 12:09:35.667691] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.985 [2024-06-10 12:09:35.667702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.985 qpair failed and we were unable to recover it. 00:31:41.985 [2024-06-10 12:09:35.677632] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.985 [2024-06-10 12:09:35.677685] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.985 [2024-06-10 12:09:35.677698] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.985 [2024-06-10 12:09:35.677703] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.985 [2024-06-10 12:09:35.677708] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.985 [2024-06-10 12:09:35.677718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.985 qpair failed and we were unable to recover it. 00:31:41.985 [2024-06-10 12:09:35.687681] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.985 [2024-06-10 12:09:35.687741] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.985 [2024-06-10 12:09:35.687752] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.985 [2024-06-10 12:09:35.687760] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.985 [2024-06-10 12:09:35.687765] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.985 [2024-06-10 12:09:35.687775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.985 qpair failed and we were unable to recover it. 
00:31:41.985 [2024-06-10 12:09:35.697702] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.985 [2024-06-10 12:09:35.697757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.985 [2024-06-10 12:09:35.697769] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.985 [2024-06-10 12:09:35.697774] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.985 [2024-06-10 12:09:35.697779] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.985 [2024-06-10 12:09:35.697790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.985 qpair failed and we were unable to recover it. 00:31:41.985 [2024-06-10 12:09:35.707728] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.985 [2024-06-10 12:09:35.707785] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.985 [2024-06-10 12:09:35.707797] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.985 [2024-06-10 12:09:35.707802] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.985 [2024-06-10 12:09:35.707807] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.985 [2024-06-10 12:09:35.707818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.985 qpair failed and we were unable to recover it. 00:31:41.985 [2024-06-10 12:09:35.717744] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.985 [2024-06-10 12:09:35.717834] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.985 [2024-06-10 12:09:35.717846] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.985 [2024-06-10 12:09:35.717851] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.985 [2024-06-10 12:09:35.717856] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.985 [2024-06-10 12:09:35.717866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.985 qpair failed and we were unable to recover it. 
00:31:41.985 [2024-06-10 12:09:35.727682] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.985 [2024-06-10 12:09:35.727745] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.985 [2024-06-10 12:09:35.727757] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.985 [2024-06-10 12:09:35.727762] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.985 [2024-06-10 12:09:35.727766] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.985 [2024-06-10 12:09:35.727777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.985 qpair failed and we were unable to recover it. 00:31:41.985 [2024-06-10 12:09:35.737782] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.985 [2024-06-10 12:09:35.737833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.985 [2024-06-10 12:09:35.737846] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.985 [2024-06-10 12:09:35.737851] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.985 [2024-06-10 12:09:35.737856] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.985 [2024-06-10 12:09:35.737866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.985 qpair failed and we were unable to recover it. 00:31:41.985 [2024-06-10 12:09:35.747867] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.985 [2024-06-10 12:09:35.747941] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.985 [2024-06-10 12:09:35.747953] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.985 [2024-06-10 12:09:35.747958] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.985 [2024-06-10 12:09:35.747962] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:41.985 [2024-06-10 12:09:35.747973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.985 qpair failed and we were unable to recover it. 
00:31:42.247 [2024-06-10 12:09:35.757851] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.247 [2024-06-10 12:09:35.757910] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.247 [2024-06-10 12:09:35.757929] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.247 [2024-06-10 12:09:35.757935] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.247 [2024-06-10 12:09:35.757940] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.247 [2024-06-10 12:09:35.757954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.247 qpair failed and we were unable to recover it. 00:31:42.247 [2024-06-10 12:09:35.767976] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.247 [2024-06-10 12:09:35.768088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.247 [2024-06-10 12:09:35.768107] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.247 [2024-06-10 12:09:35.768113] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.247 [2024-06-10 12:09:35.768118] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.247 [2024-06-10 12:09:35.768132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.247 qpair failed and we were unable to recover it. 00:31:42.247 [2024-06-10 12:09:35.777908] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.247 [2024-06-10 12:09:35.777962] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.247 [2024-06-10 12:09:35.777979] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.247 [2024-06-10 12:09:35.777984] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.247 [2024-06-10 12:09:35.777989] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.247 [2024-06-10 12:09:35.778001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.247 qpair failed and we were unable to recover it. 
00:31:42.247 [2024-06-10 12:09:35.787927] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.247 [2024-06-10 12:09:35.787985] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.247 [2024-06-10 12:09:35.787998] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.247 [2024-06-10 12:09:35.788003] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.247 [2024-06-10 12:09:35.788008] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.247 [2024-06-10 12:09:35.788019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.247 qpair failed and we were unable to recover it. 00:31:42.247 [2024-06-10 12:09:35.797958] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.247 [2024-06-10 12:09:35.798010] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.247 [2024-06-10 12:09:35.798022] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.247 [2024-06-10 12:09:35.798028] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.247 [2024-06-10 12:09:35.798033] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.247 [2024-06-10 12:09:35.798043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.247 qpair failed and we were unable to recover it. 00:31:42.247 [2024-06-10 12:09:35.808018] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.247 [2024-06-10 12:09:35.808075] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.247 [2024-06-10 12:09:35.808087] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.247 [2024-06-10 12:09:35.808092] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.247 [2024-06-10 12:09:35.808097] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.247 [2024-06-10 12:09:35.808108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.247 qpair failed and we were unable to recover it. 
00:31:42.247 [2024-06-10 12:09:35.818019] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.247 [2024-06-10 12:09:35.818120] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.247 [2024-06-10 12:09:35.818132] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.247 [2024-06-10 12:09:35.818137] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.247 [2024-06-10 12:09:35.818142] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.247 [2024-06-10 12:09:35.818155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.247 qpair failed and we were unable to recover it. 00:31:42.247 [2024-06-10 12:09:35.828034] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.247 [2024-06-10 12:09:35.828090] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.247 [2024-06-10 12:09:35.828101] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.247 [2024-06-10 12:09:35.828106] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.247 [2024-06-10 12:09:35.828110] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.247 [2024-06-10 12:09:35.828121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.247 qpair failed and we were unable to recover it. 00:31:42.247 [2024-06-10 12:09:35.838060] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.247 [2024-06-10 12:09:35.838112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.247 [2024-06-10 12:09:35.838124] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.247 [2024-06-10 12:09:35.838129] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.247 [2024-06-10 12:09:35.838134] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.247 [2024-06-10 12:09:35.838144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.247 qpair failed and we were unable to recover it. 
00:31:42.247 [2024-06-10 12:09:35.848117] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.247 [2024-06-10 12:09:35.848175] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.247 [2024-06-10 12:09:35.848186] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.247 [2024-06-10 12:09:35.848192] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.247 [2024-06-10 12:09:35.848196] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.247 [2024-06-10 12:09:35.848207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.247 qpair failed and we were unable to recover it. 00:31:42.247 [2024-06-10 12:09:35.858122] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.247 [2024-06-10 12:09:35.858175] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.247 [2024-06-10 12:09:35.858187] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.247 [2024-06-10 12:09:35.858192] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.247 [2024-06-10 12:09:35.858197] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.247 [2024-06-10 12:09:35.858207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.247 qpair failed and we were unable to recover it. 00:31:42.247 [2024-06-10 12:09:35.868152] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.247 [2024-06-10 12:09:35.868208] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.247 [2024-06-10 12:09:35.868223] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.247 [2024-06-10 12:09:35.868228] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.247 [2024-06-10 12:09:35.868233] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.247 [2024-06-10 12:09:35.868247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.247 qpair failed and we were unable to recover it. 
00:31:42.247 [2024-06-10 12:09:35.878166] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.247 [2024-06-10 12:09:35.878219] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.247 [2024-06-10 12:09:35.878231] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.247 [2024-06-10 12:09:35.878236] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.247 [2024-06-10 12:09:35.878240] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.247 [2024-06-10 12:09:35.878254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.247 qpair failed and we were unable to recover it. 00:31:42.248 [2024-06-10 12:09:35.888115] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.248 [2024-06-10 12:09:35.888174] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.248 [2024-06-10 12:09:35.888185] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.248 [2024-06-10 12:09:35.888190] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.248 [2024-06-10 12:09:35.888195] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.248 [2024-06-10 12:09:35.888205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.248 qpair failed and we were unable to recover it. 00:31:42.248 [2024-06-10 12:09:35.898324] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.248 [2024-06-10 12:09:35.898380] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.248 [2024-06-10 12:09:35.898391] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.248 [2024-06-10 12:09:35.898397] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.248 [2024-06-10 12:09:35.898401] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.248 [2024-06-10 12:09:35.898412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.248 qpair failed and we were unable to recover it. 
00:31:42.248 [2024-06-10 12:09:35.908250] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.248 [2024-06-10 12:09:35.908309] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.248 [2024-06-10 12:09:35.908321] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.248 [2024-06-10 12:09:35.908326] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.248 [2024-06-10 12:09:35.908330] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.248 [2024-06-10 12:09:35.908344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.248 qpair failed and we were unable to recover it. 00:31:42.248 [2024-06-10 12:09:35.918278] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.248 [2024-06-10 12:09:35.918335] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.248 [2024-06-10 12:09:35.918347] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.248 [2024-06-10 12:09:35.918352] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.248 [2024-06-10 12:09:35.918357] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.248 [2024-06-10 12:09:35.918367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.248 qpair failed and we were unable to recover it. 00:31:42.248 [2024-06-10 12:09:35.928332] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.248 [2024-06-10 12:09:35.928382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.248 [2024-06-10 12:09:35.928394] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.248 [2024-06-10 12:09:35.928399] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.248 [2024-06-10 12:09:35.928404] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.248 [2024-06-10 12:09:35.928415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.248 qpair failed and we were unable to recover it. 
00:31:42.248 [2024-06-10 12:09:35.938354] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.248 [2024-06-10 12:09:35.938406] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.248 [2024-06-10 12:09:35.938418] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.248 [2024-06-10 12:09:35.938423] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.248 [2024-06-10 12:09:35.938428] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.248 [2024-06-10 12:09:35.938438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.248 qpair failed and we were unable to recover it. 00:31:42.248 [2024-06-10 12:09:35.948254] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.248 [2024-06-10 12:09:35.948310] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.248 [2024-06-10 12:09:35.948321] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.248 [2024-06-10 12:09:35.948327] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.248 [2024-06-10 12:09:35.948331] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.248 [2024-06-10 12:09:35.948343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.248 qpair failed and we were unable to recover it. 00:31:42.248 [2024-06-10 12:09:35.958402] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.248 [2024-06-10 12:09:35.958459] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.248 [2024-06-10 12:09:35.958473] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.248 [2024-06-10 12:09:35.958478] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.248 [2024-06-10 12:09:35.958483] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.248 [2024-06-10 12:09:35.958493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.248 qpair failed and we were unable to recover it. 
00:31:42.248 [2024-06-10 12:09:35.968448] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.248 [2024-06-10 12:09:35.968498] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.248 [2024-06-10 12:09:35.968510] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.248 [2024-06-10 12:09:35.968515] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.248 [2024-06-10 12:09:35.968520] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.248 [2024-06-10 12:09:35.968530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.248 qpair failed and we were unable to recover it. 00:31:42.248 [2024-06-10 12:09:35.978334] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.248 [2024-06-10 12:09:35.978382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.248 [2024-06-10 12:09:35.978394] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.248 [2024-06-10 12:09:35.978400] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.248 [2024-06-10 12:09:35.978404] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.248 [2024-06-10 12:09:35.978414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.248 qpair failed and we were unable to recover it. 00:31:42.248 [2024-06-10 12:09:35.988428] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.248 [2024-06-10 12:09:35.988480] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.248 [2024-06-10 12:09:35.988492] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.248 [2024-06-10 12:09:35.988497] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.248 [2024-06-10 12:09:35.988502] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.248 [2024-06-10 12:09:35.988513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.248 qpair failed and we were unable to recover it. 
00:31:42.248 [2024-06-10 12:09:35.998539] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.248 [2024-06-10 12:09:35.998591] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.248 [2024-06-10 12:09:35.998603] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.248 [2024-06-10 12:09:35.998608] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.248 [2024-06-10 12:09:35.998615] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.248 [2024-06-10 12:09:35.998625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.248 qpair failed and we were unable to recover it. 00:31:42.248 [2024-06-10 12:09:36.008506] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.248 [2024-06-10 12:09:36.008557] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.248 [2024-06-10 12:09:36.008569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.248 [2024-06-10 12:09:36.008574] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.248 [2024-06-10 12:09:36.008579] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.248 [2024-06-10 12:09:36.008589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.248 qpair failed and we were unable to recover it. 00:31:42.509 [2024-06-10 12:09:36.018577] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.509 [2024-06-10 12:09:36.018631] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.509 [2024-06-10 12:09:36.018643] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.509 [2024-06-10 12:09:36.018648] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.509 [2024-06-10 12:09:36.018653] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.509 [2024-06-10 12:09:36.018663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.509 qpair failed and we were unable to recover it. 
00:31:42.509 [2024-06-10 12:09:36.028489] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.509 [2024-06-10 12:09:36.028546] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.509 [2024-06-10 12:09:36.028558] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.509 [2024-06-10 12:09:36.028563] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.509 [2024-06-10 12:09:36.028568] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.509 [2024-06-10 12:09:36.028578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-06-10 12:09:36.038637] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.509 [2024-06-10 12:09:36.038695] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.509 [2024-06-10 12:09:36.038707] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.509 [2024-06-10 12:09:36.038712] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.509 [2024-06-10 12:09:36.038716] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.509 [2024-06-10 12:09:36.038726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-06-10 12:09:36.048671] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.509 [2024-06-10 12:09:36.048728] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.509 [2024-06-10 12:09:36.048740] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.509 [2024-06-10 12:09:36.048745] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.509 [2024-06-10 12:09:36.048750] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.509 [2024-06-10 12:09:36.048761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.509 qpair failed and we were unable to recover it. 
00:31:42.509 [2024-06-10 12:09:36.058693] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.509 [2024-06-10 12:09:36.058750] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.509 [2024-06-10 12:09:36.058761] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.509 [2024-06-10 12:09:36.058767] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.509 [2024-06-10 12:09:36.058772] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.509 [2024-06-10 12:09:36.058782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-06-10 12:09:36.068706] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.509 [2024-06-10 12:09:36.068763] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.509 [2024-06-10 12:09:36.068774] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.509 [2024-06-10 12:09:36.068780] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.509 [2024-06-10 12:09:36.068784] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.509 [2024-06-10 12:09:36.068794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-06-10 12:09:36.078732] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.509 [2024-06-10 12:09:36.078787] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.509 [2024-06-10 12:09:36.078800] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.509 [2024-06-10 12:09:36.078805] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.509 [2024-06-10 12:09:36.078810] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.509 [2024-06-10 12:09:36.078820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.509 qpair failed and we were unable to recover it. 
00:31:42.509 [2024-06-10 12:09:36.088776] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.509 [2024-06-10 12:09:36.088830] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.509 [2024-06-10 12:09:36.088842] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.509 [2024-06-10 12:09:36.088853] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.509 [2024-06-10 12:09:36.088858] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.509 [2024-06-10 12:09:36.088868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-06-10 12:09:36.098801] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.509 [2024-06-10 12:09:36.098850] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.509 [2024-06-10 12:09:36.098862] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.509 [2024-06-10 12:09:36.098868] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.509 [2024-06-10 12:09:36.098872] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.509 [2024-06-10 12:09:36.098882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-06-10 12:09:36.108724] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.509 [2024-06-10 12:09:36.108784] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.509 [2024-06-10 12:09:36.108796] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.509 [2024-06-10 12:09:36.108802] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.509 [2024-06-10 12:09:36.108806] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.509 [2024-06-10 12:09:36.108817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.509 qpair failed and we were unable to recover it. 
00:31:42.509 [2024-06-10 12:09:36.118848] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.509 [2024-06-10 12:09:36.118899] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.509 [2024-06-10 12:09:36.118911] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.509 [2024-06-10 12:09:36.118916] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.509 [2024-06-10 12:09:36.118921] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.509 [2024-06-10 12:09:36.118931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-06-10 12:09:36.128894] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.509 [2024-06-10 12:09:36.128949] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.509 [2024-06-10 12:09:36.128961] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.509 [2024-06-10 12:09:36.128966] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.509 [2024-06-10 12:09:36.128970] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.509 [2024-06-10 12:09:36.128980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-06-10 12:09:36.138924] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.509 [2024-06-10 12:09:36.139003] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.509 [2024-06-10 12:09:36.139015] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.509 [2024-06-10 12:09:36.139020] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.509 [2024-06-10 12:09:36.139025] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.509 [2024-06-10 12:09:36.139036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.509 qpair failed and we were unable to recover it. 
00:31:42.509 [2024-06-10 12:09:36.148941] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.509 [2024-06-10 12:09:36.149001] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.509 [2024-06-10 12:09:36.149013] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.509 [2024-06-10 12:09:36.149018] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.509 [2024-06-10 12:09:36.149023] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.509 [2024-06-10 12:09:36.149033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-06-10 12:09:36.158941] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.509 [2024-06-10 12:09:36.158992] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.509 [2024-06-10 12:09:36.159004] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.509 [2024-06-10 12:09:36.159009] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.509 [2024-06-10 12:09:36.159014] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.509 [2024-06-10 12:09:36.159024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-06-10 12:09:36.168992] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.509 [2024-06-10 12:09:36.169048] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.509 [2024-06-10 12:09:36.169060] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.509 [2024-06-10 12:09:36.169068] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.509 [2024-06-10 12:09:36.169072] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.509 [2024-06-10 12:09:36.169083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.509 qpair failed and we were unable to recover it. 
00:31:42.509 [2024-06-10 12:09:36.179072] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.509 [2024-06-10 12:09:36.179122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.509 [2024-06-10 12:09:36.179134] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.509 [2024-06-10 12:09:36.179142] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.509 [2024-06-10 12:09:36.179147] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.509 [2024-06-10 12:09:36.179158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-06-10 12:09:36.189048] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.509 [2024-06-10 12:09:36.189110] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.509 [2024-06-10 12:09:36.189122] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.509 [2024-06-10 12:09:36.189127] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.509 [2024-06-10 12:09:36.189132] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.509 [2024-06-10 12:09:36.189142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-06-10 12:09:36.199105] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.509 [2024-06-10 12:09:36.199157] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.509 [2024-06-10 12:09:36.199169] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.509 [2024-06-10 12:09:36.199174] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.509 [2024-06-10 12:09:36.199178] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.509 [2024-06-10 12:09:36.199188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.509 qpair failed and we were unable to recover it. 
00:31:42.509 [2024-06-10 12:09:36.209093] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.509 [2024-06-10 12:09:36.209147] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.509 [2024-06-10 12:09:36.209159] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.509 [2024-06-10 12:09:36.209164] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.509 [2024-06-10 12:09:36.209169] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.509 [2024-06-10 12:09:36.209179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-06-10 12:09:36.219123] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.509 [2024-06-10 12:09:36.219176] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.509 [2024-06-10 12:09:36.219188] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.509 [2024-06-10 12:09:36.219193] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.509 [2024-06-10 12:09:36.219198] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.509 [2024-06-10 12:09:36.219208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-06-10 12:09:36.229135] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.509 [2024-06-10 12:09:36.229230] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.509 [2024-06-10 12:09:36.229245] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.509 [2024-06-10 12:09:36.229251] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.509 [2024-06-10 12:09:36.229255] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.509 [2024-06-10 12:09:36.229266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.509 qpair failed and we were unable to recover it. 
00:31:42.509 [2024-06-10 12:09:36.239181] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.509 [2024-06-10 12:09:36.239246] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.509 [2024-06-10 12:09:36.239258] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.509 [2024-06-10 12:09:36.239264] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.509 [2024-06-10 12:09:36.239268] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.509 [2024-06-10 12:09:36.239278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.510 [2024-06-10 12:09:36.249210] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.510 [2024-06-10 12:09:36.249264] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.510 [2024-06-10 12:09:36.249276] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.510 [2024-06-10 12:09:36.249281] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.510 [2024-06-10 12:09:36.249286] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.510 [2024-06-10 12:09:36.249296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-06-10 12:09:36.259230] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.510 [2024-06-10 12:09:36.259285] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.510 [2024-06-10 12:09:36.259296] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.510 [2024-06-10 12:09:36.259302] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.510 [2024-06-10 12:09:36.259306] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.510 [2024-06-10 12:09:36.259317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.510 qpair failed and we were unable to recover it. 
00:31:42.510 [2024-06-10 12:09:36.269247] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.510 [2024-06-10 12:09:36.269304] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.510 [2024-06-10 12:09:36.269319] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.510 [2024-06-10 12:09:36.269324] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.510 [2024-06-10 12:09:36.269329] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.510 [2024-06-10 12:09:36.269339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-06-10 12:09:36.279346] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.510 [2024-06-10 12:09:36.279407] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.510 [2024-06-10 12:09:36.279419] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.510 [2024-06-10 12:09:36.279425] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.510 [2024-06-10 12:09:36.279429] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.510 [2024-06-10 12:09:36.279440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.770 [2024-06-10 12:09:36.289315] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.770 [2024-06-10 12:09:36.289364] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.770 [2024-06-10 12:09:36.289376] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.771 [2024-06-10 12:09:36.289381] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.771 [2024-06-10 12:09:36.289385] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.771 [2024-06-10 12:09:36.289396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.771 qpair failed and we were unable to recover it. 
00:31:42.771 [2024-06-10 12:09:36.299353] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.771 [2024-06-10 12:09:36.299457] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.771 [2024-06-10 12:09:36.299470] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.771 [2024-06-10 12:09:36.299475] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.771 [2024-06-10 12:09:36.299480] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.771 [2024-06-10 12:09:36.299490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.771 qpair failed and we were unable to recover it. 00:31:42.771 [2024-06-10 12:09:36.309375] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.771 [2024-06-10 12:09:36.309438] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.771 [2024-06-10 12:09:36.309450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.771 [2024-06-10 12:09:36.309456] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.771 [2024-06-10 12:09:36.309460] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.771 [2024-06-10 12:09:36.309474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.771 qpair failed and we were unable to recover it. 00:31:42.771 [2024-06-10 12:09:36.319399] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.771 [2024-06-10 12:09:36.319451] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.771 [2024-06-10 12:09:36.319463] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.771 [2024-06-10 12:09:36.319468] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.771 [2024-06-10 12:09:36.319473] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.771 [2024-06-10 12:09:36.319483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.771 qpair failed and we were unable to recover it. 
00:31:42.771 [2024-06-10 12:09:36.329438] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.771 [2024-06-10 12:09:36.329540] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.771 [2024-06-10 12:09:36.329552] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.771 [2024-06-10 12:09:36.329557] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.771 [2024-06-10 12:09:36.329562] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.771 [2024-06-10 12:09:36.329572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.771 qpair failed and we were unable to recover it. 00:31:42.771 [2024-06-10 12:09:36.339464] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.771 [2024-06-10 12:09:36.339516] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.771 [2024-06-10 12:09:36.339528] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.771 [2024-06-10 12:09:36.339533] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.771 [2024-06-10 12:09:36.339538] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.771 [2024-06-10 12:09:36.339548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.771 qpair failed and we were unable to recover it. 00:31:42.771 [2024-06-10 12:09:36.349390] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.771 [2024-06-10 12:09:36.349447] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.771 [2024-06-10 12:09:36.349459] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.771 [2024-06-10 12:09:36.349464] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.771 [2024-06-10 12:09:36.349468] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.771 [2024-06-10 12:09:36.349479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.771 qpair failed and we were unable to recover it. 
00:31:42.771 [2024-06-10 12:09:36.359494] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.771 [2024-06-10 12:09:36.359546] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.771 [2024-06-10 12:09:36.359561] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.771 [2024-06-10 12:09:36.359566] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.771 [2024-06-10 12:09:36.359571] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.771 [2024-06-10 12:09:36.359581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.771 qpair failed and we were unable to recover it. 00:31:42.771 [2024-06-10 12:09:36.369535] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.771 [2024-06-10 12:09:36.369590] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.771 [2024-06-10 12:09:36.369601] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.771 [2024-06-10 12:09:36.369607] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.771 [2024-06-10 12:09:36.369611] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.771 [2024-06-10 12:09:36.369622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.771 qpair failed and we were unable to recover it. 00:31:42.771 [2024-06-10 12:09:36.379565] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.771 [2024-06-10 12:09:36.379615] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.771 [2024-06-10 12:09:36.379627] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.771 [2024-06-10 12:09:36.379632] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.771 [2024-06-10 12:09:36.379637] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.771 [2024-06-10 12:09:36.379647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.771 qpair failed and we were unable to recover it. 
00:31:42.771 [2024-06-10 12:09:36.389475] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.771 [2024-06-10 12:09:36.389530] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.771 [2024-06-10 12:09:36.389545] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.771 [2024-06-10 12:09:36.389552] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.771 [2024-06-10 12:09:36.389556] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.771 [2024-06-10 12:09:36.389569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.771 qpair failed and we were unable to recover it. 00:31:42.771 [2024-06-10 12:09:36.399496] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.771 [2024-06-10 12:09:36.399546] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.771 [2024-06-10 12:09:36.399558] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.771 [2024-06-10 12:09:36.399564] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.771 [2024-06-10 12:09:36.399568] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.771 [2024-06-10 12:09:36.399582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.771 qpair failed and we were unable to recover it. 00:31:42.771 [2024-06-10 12:09:36.409644] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.771 [2024-06-10 12:09:36.409744] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.771 [2024-06-10 12:09:36.409756] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.771 [2024-06-10 12:09:36.409761] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.771 [2024-06-10 12:09:36.409766] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.771 [2024-06-10 12:09:36.409776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.771 qpair failed and we were unable to recover it. 
00:31:42.771 [2024-06-10 12:09:36.419672] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.771 [2024-06-10 12:09:36.419723] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.771 [2024-06-10 12:09:36.419734] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.772 [2024-06-10 12:09:36.419739] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.772 [2024-06-10 12:09:36.419744] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.772 [2024-06-10 12:09:36.419754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.772 qpair failed and we were unable to recover it. 00:31:42.772 [2024-06-10 12:09:36.429583] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.772 [2024-06-10 12:09:36.429639] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.772 [2024-06-10 12:09:36.429652] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.772 [2024-06-10 12:09:36.429657] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.772 [2024-06-10 12:09:36.429661] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.772 [2024-06-10 12:09:36.429672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.772 qpair failed and we were unable to recover it. 00:31:42.772 [2024-06-10 12:09:36.439734] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.772 [2024-06-10 12:09:36.439787] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.772 [2024-06-10 12:09:36.439800] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.772 [2024-06-10 12:09:36.439805] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.772 [2024-06-10 12:09:36.439809] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.772 [2024-06-10 12:09:36.439820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.772 qpair failed and we were unable to recover it. 
00:31:42.772 [2024-06-10 12:09:36.449774] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.772 [2024-06-10 12:09:36.449827] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.772 [2024-06-10 12:09:36.449841] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.772 [2024-06-10 12:09:36.449847] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.772 [2024-06-10 12:09:36.449851] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.772 [2024-06-10 12:09:36.449861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.772 qpair failed and we were unable to recover it. 00:31:42.772 [2024-06-10 12:09:36.459655] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.772 [2024-06-10 12:09:36.459714] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.772 [2024-06-10 12:09:36.459725] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.772 [2024-06-10 12:09:36.459731] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.772 [2024-06-10 12:09:36.459735] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.772 [2024-06-10 12:09:36.459745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.772 qpair failed and we were unable to recover it. 00:31:42.772 [2024-06-10 12:09:36.469800] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.772 [2024-06-10 12:09:36.469856] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.772 [2024-06-10 12:09:36.469868] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.772 [2024-06-10 12:09:36.469873] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.772 [2024-06-10 12:09:36.469877] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.772 [2024-06-10 12:09:36.469887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.772 qpair failed and we were unable to recover it. 
00:31:42.772 [2024-06-10 12:09:36.479701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.772 [2024-06-10 12:09:36.479754] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.772 [2024-06-10 12:09:36.479766] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.772 [2024-06-10 12:09:36.479771] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.772 [2024-06-10 12:09:36.479776] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.772 [2024-06-10 12:09:36.479786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.772 qpair failed and we were unable to recover it. 00:31:42.772 [2024-06-10 12:09:36.489847] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.772 [2024-06-10 12:09:36.489900] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.772 [2024-06-10 12:09:36.489912] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.772 [2024-06-10 12:09:36.489917] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.772 [2024-06-10 12:09:36.489924] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.772 [2024-06-10 12:09:36.489936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.772 qpair failed and we were unable to recover it. 00:31:42.772 [2024-06-10 12:09:36.499911] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.772 [2024-06-10 12:09:36.499974] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.772 [2024-06-10 12:09:36.499986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.772 [2024-06-10 12:09:36.499991] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.772 [2024-06-10 12:09:36.499996] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.772 [2024-06-10 12:09:36.500006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.772 qpair failed and we were unable to recover it. 
00:31:42.772 [2024-06-10 12:09:36.509888] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.772 [2024-06-10 12:09:36.509944] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.772 [2024-06-10 12:09:36.509956] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.772 [2024-06-10 12:09:36.509961] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.772 [2024-06-10 12:09:36.509965] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.772 [2024-06-10 12:09:36.509975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.772 qpair failed and we were unable to recover it. 00:31:42.772 [2024-06-10 12:09:36.519827] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.772 [2024-06-10 12:09:36.519881] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.772 [2024-06-10 12:09:36.519893] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.772 [2024-06-10 12:09:36.519898] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.772 [2024-06-10 12:09:36.519902] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.772 [2024-06-10 12:09:36.519912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.772 qpair failed and we were unable to recover it. 00:31:42.772 [2024-06-10 12:09:36.529980] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.772 [2024-06-10 12:09:36.530034] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.772 [2024-06-10 12:09:36.530046] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.772 [2024-06-10 12:09:36.530051] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.772 [2024-06-10 12:09:36.530055] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.772 [2024-06-10 12:09:36.530065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.772 qpair failed and we were unable to recover it. 
00:31:42.772 [2024-06-10 12:09:36.540002] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.772 [2024-06-10 12:09:36.540063] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.772 [2024-06-10 12:09:36.540075] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.772 [2024-06-10 12:09:36.540080] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.772 [2024-06-10 12:09:36.540084] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:42.772 [2024-06-10 12:09:36.540094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.772 qpair failed and we were unable to recover it. 00:31:43.034 [2024-06-10 12:09:36.550048] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.034 [2024-06-10 12:09:36.550107] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.034 [2024-06-10 12:09:36.550119] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.034 [2024-06-10 12:09:36.550125] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.034 [2024-06-10 12:09:36.550129] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.034 [2024-06-10 12:09:36.550139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.034 qpair failed and we were unable to recover it. 00:31:43.034 [2024-06-10 12:09:36.560060] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.034 [2024-06-10 12:09:36.560107] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.034 [2024-06-10 12:09:36.560119] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.034 [2024-06-10 12:09:36.560124] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.034 [2024-06-10 12:09:36.560129] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.034 [2024-06-10 12:09:36.560139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.034 qpair failed and we were unable to recover it. 
00:31:43.034 [2024-06-10 12:09:36.570015] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.034 [2024-06-10 12:09:36.570068] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.034 [2024-06-10 12:09:36.570080] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.034 [2024-06-10 12:09:36.570085] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.034 [2024-06-10 12:09:36.570089] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.034 [2024-06-10 12:09:36.570099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.034 qpair failed and we were unable to recover it. 00:31:43.034 [2024-06-10 12:09:36.580120] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.034 [2024-06-10 12:09:36.580211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.034 [2024-06-10 12:09:36.580224] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.034 [2024-06-10 12:09:36.580229] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.034 [2024-06-10 12:09:36.580238] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.034 [2024-06-10 12:09:36.580253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.034 qpair failed and we were unable to recover it. 00:31:43.034 [2024-06-10 12:09:36.590138] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.034 [2024-06-10 12:09:36.590195] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.034 [2024-06-10 12:09:36.590207] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.034 [2024-06-10 12:09:36.590212] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.034 [2024-06-10 12:09:36.590216] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.034 [2024-06-10 12:09:36.590227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.034 qpair failed and we were unable to recover it. 
00:31:43.034 [2024-06-10 12:09:36.600163] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.034 [2024-06-10 12:09:36.600267] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.034 [2024-06-10 12:09:36.600280] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.034 [2024-06-10 12:09:36.600286] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.034 [2024-06-10 12:09:36.600291] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.034 [2024-06-10 12:09:36.600302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.034 qpair failed and we were unable to recover it. 00:31:43.034 [2024-06-10 12:09:36.610185] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.034 [2024-06-10 12:09:36.610235] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.034 [2024-06-10 12:09:36.610252] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.034 [2024-06-10 12:09:36.610257] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.034 [2024-06-10 12:09:36.610261] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.034 [2024-06-10 12:09:36.610272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.034 qpair failed and we were unable to recover it. 00:31:43.034 [2024-06-10 12:09:36.620101] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.034 [2024-06-10 12:09:36.620151] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.034 [2024-06-10 12:09:36.620163] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.034 [2024-06-10 12:09:36.620168] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.034 [2024-06-10 12:09:36.620172] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.034 [2024-06-10 12:09:36.620182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.034 qpair failed and we were unable to recover it. 
00:31:43.035 [2024-06-10 12:09:36.630229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.035 [2024-06-10 12:09:36.630287] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.035 [2024-06-10 12:09:36.630298] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.035 [2024-06-10 12:09:36.630304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.035 [2024-06-10 12:09:36.630308] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.035 [2024-06-10 12:09:36.630318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.035 qpair failed and we were unable to recover it. 00:31:43.035 [2024-06-10 12:09:36.640279] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.035 [2024-06-10 12:09:36.640334] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.035 [2024-06-10 12:09:36.640346] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.035 [2024-06-10 12:09:36.640351] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.035 [2024-06-10 12:09:36.640356] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.035 [2024-06-10 12:09:36.640366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.035 qpair failed and we were unable to recover it. 00:31:43.035 [2024-06-10 12:09:36.650210] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.035 [2024-06-10 12:09:36.650267] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.035 [2024-06-10 12:09:36.650279] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.035 [2024-06-10 12:09:36.650284] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.035 [2024-06-10 12:09:36.650288] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.035 [2024-06-10 12:09:36.650299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.035 qpair failed and we were unable to recover it. 
00:31:43.035 [2024-06-10 12:09:36.660387] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.035 [2024-06-10 12:09:36.660439] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.035 [2024-06-10 12:09:36.660450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.035 [2024-06-10 12:09:36.660455] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.035 [2024-06-10 12:09:36.660460] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.035 [2024-06-10 12:09:36.660470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.035 qpair failed and we were unable to recover it. 00:31:43.035 [2024-06-10 12:09:36.670413] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.035 [2024-06-10 12:09:36.670483] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.035 [2024-06-10 12:09:36.670494] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.035 [2024-06-10 12:09:36.670502] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.035 [2024-06-10 12:09:36.670507] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.035 [2024-06-10 12:09:36.670517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.035 qpair failed and we were unable to recover it. 00:31:43.035 [2024-06-10 12:09:36.680423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.035 [2024-06-10 12:09:36.680472] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.035 [2024-06-10 12:09:36.680484] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.035 [2024-06-10 12:09:36.680489] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.035 [2024-06-10 12:09:36.680493] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.035 [2024-06-10 12:09:36.680504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.035 qpair failed and we were unable to recover it. 
00:31:43.035 [2024-06-10 12:09:36.690419] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.035 [2024-06-10 12:09:36.690471] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.035 [2024-06-10 12:09:36.690483] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.035 [2024-06-10 12:09:36.690488] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.035 [2024-06-10 12:09:36.690493] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.035 [2024-06-10 12:09:36.690503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.035 qpair failed and we were unable to recover it. 00:31:43.035 [2024-06-10 12:09:36.700461] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.035 [2024-06-10 12:09:36.700546] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.035 [2024-06-10 12:09:36.700557] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.035 [2024-06-10 12:09:36.700564] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.035 [2024-06-10 12:09:36.700568] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.035 [2024-06-10 12:09:36.700579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.035 qpair failed and we were unable to recover it. 00:31:43.035 [2024-06-10 12:09:36.710500] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.035 [2024-06-10 12:09:36.710554] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.035 [2024-06-10 12:09:36.710566] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.035 [2024-06-10 12:09:36.710571] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.035 [2024-06-10 12:09:36.710575] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.035 [2024-06-10 12:09:36.710585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.035 qpair failed and we were unable to recover it. 
00:31:43.035 [2024-06-10 12:09:36.720510] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.035 [2024-06-10 12:09:36.720561] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.035 [2024-06-10 12:09:36.720573] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.035 [2024-06-10 12:09:36.720578] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.035 [2024-06-10 12:09:36.720582] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.035 [2024-06-10 12:09:36.720592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.035 qpair failed and we were unable to recover it. 00:31:43.035 [2024-06-10 12:09:36.730412] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.035 [2024-06-10 12:09:36.730467] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.035 [2024-06-10 12:09:36.730478] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.035 [2024-06-10 12:09:36.730483] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.035 [2024-06-10 12:09:36.730488] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.035 [2024-06-10 12:09:36.730498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.035 qpair failed and we were unable to recover it. 00:31:43.035 [2024-06-10 12:09:36.740545] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.035 [2024-06-10 12:09:36.740599] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.035 [2024-06-10 12:09:36.740611] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.035 [2024-06-10 12:09:36.740616] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.035 [2024-06-10 12:09:36.740620] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.035 [2024-06-10 12:09:36.740631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.035 qpair failed and we were unable to recover it. 
00:31:43.035 [2024-06-10 12:09:36.750644] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.035 [2024-06-10 12:09:36.750703] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.035 [2024-06-10 12:09:36.750715] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.035 [2024-06-10 12:09:36.750720] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.035 [2024-06-10 12:09:36.750724] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.035 [2024-06-10 12:09:36.750734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.035 qpair failed and we were unable to recover it. 00:31:43.035 [2024-06-10 12:09:36.760621] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.035 [2024-06-10 12:09:36.760679] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.035 [2024-06-10 12:09:36.760690] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.035 [2024-06-10 12:09:36.760699] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.036 [2024-06-10 12:09:36.760704] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.036 [2024-06-10 12:09:36.760714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.036 qpair failed and we were unable to recover it. 00:31:43.036 [2024-06-10 12:09:36.770630] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.036 [2024-06-10 12:09:36.770717] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.036 [2024-06-10 12:09:36.770728] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.036 [2024-06-10 12:09:36.770734] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.036 [2024-06-10 12:09:36.770738] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.036 [2024-06-10 12:09:36.770748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.036 qpair failed and we were unable to recover it. 
00:31:43.036 [2024-06-10 12:09:36.780665] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.036 [2024-06-10 12:09:36.780732] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.036 [2024-06-10 12:09:36.780744] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.036 [2024-06-10 12:09:36.780749] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.036 [2024-06-10 12:09:36.780754] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.036 [2024-06-10 12:09:36.780764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.036 qpair failed and we were unable to recover it. 00:31:43.036 [2024-06-10 12:09:36.790568] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.036 [2024-06-10 12:09:36.790624] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.036 [2024-06-10 12:09:36.790635] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.036 [2024-06-10 12:09:36.790640] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.036 [2024-06-10 12:09:36.790645] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.036 [2024-06-10 12:09:36.790655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.036 qpair failed and we were unable to recover it. 00:31:43.036 [2024-06-10 12:09:36.800715] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.036 [2024-06-10 12:09:36.800768] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.036 [2024-06-10 12:09:36.800779] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.036 [2024-06-10 12:09:36.800784] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.036 [2024-06-10 12:09:36.800789] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.036 [2024-06-10 12:09:36.800799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.036 qpair failed and we were unable to recover it. 
00:31:43.298 [2024-06-10 12:09:36.810725] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.298 [2024-06-10 12:09:36.810776] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.298 [2024-06-10 12:09:36.810788] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.298 [2024-06-10 12:09:36.810794] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.298 [2024-06-10 12:09:36.810798] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.298 [2024-06-10 12:09:36.810808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.298 qpair failed and we were unable to recover it. 00:31:43.298 [2024-06-10 12:09:36.820780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.298 [2024-06-10 12:09:36.820832] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.298 [2024-06-10 12:09:36.820844] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.298 [2024-06-10 12:09:36.820849] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.298 [2024-06-10 12:09:36.820854] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.298 [2024-06-10 12:09:36.820864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.298 qpair failed and we were unable to recover it. 00:31:43.298 [2024-06-10 12:09:36.830786] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.298 [2024-06-10 12:09:36.830843] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.298 [2024-06-10 12:09:36.830855] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.298 [2024-06-10 12:09:36.830860] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.298 [2024-06-10 12:09:36.830864] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.298 [2024-06-10 12:09:36.830874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.298 qpair failed and we were unable to recover it. 
00:31:43.298 [2024-06-10 12:09:36.840853] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.298 [2024-06-10 12:09:36.840921] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.298 [2024-06-10 12:09:36.840933] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.298 [2024-06-10 12:09:36.840938] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.298 [2024-06-10 12:09:36.840943] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.298 [2024-06-10 12:09:36.840954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.298 qpair failed and we were unable to recover it. 00:31:43.298 [2024-06-10 12:09:36.850849] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.298 [2024-06-10 12:09:36.850903] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.298 [2024-06-10 12:09:36.850925] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.298 [2024-06-10 12:09:36.850931] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.298 [2024-06-10 12:09:36.850936] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.298 [2024-06-10 12:09:36.850949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.298 qpair failed and we were unable to recover it. 00:31:43.298 [2024-06-10 12:09:36.860901] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.298 [2024-06-10 12:09:36.860955] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.298 [2024-06-10 12:09:36.860974] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.298 [2024-06-10 12:09:36.860979] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.298 [2024-06-10 12:09:36.860984] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.298 [2024-06-10 12:09:36.860997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.298 qpair failed and we were unable to recover it. 
00:31:43.298 [2024-06-10 12:09:36.870791] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.298 [2024-06-10 12:09:36.870901] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.298 [2024-06-10 12:09:36.870914] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.298 [2024-06-10 12:09:36.870919] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.298 [2024-06-10 12:09:36.870924] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.298 [2024-06-10 12:09:36.870935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.298 qpair failed and we were unable to recover it. 00:31:43.298 [2024-06-10 12:09:36.880931] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.298 [2024-06-10 12:09:36.880985] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.298 [2024-06-10 12:09:36.881004] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.298 [2024-06-10 12:09:36.881010] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.298 [2024-06-10 12:09:36.881015] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.298 [2024-06-10 12:09:36.881028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.298 qpair failed and we were unable to recover it. 00:31:43.298 [2024-06-10 12:09:36.890944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.298 [2024-06-10 12:09:36.891005] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.298 [2024-06-10 12:09:36.891024] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.298 [2024-06-10 12:09:36.891030] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.298 [2024-06-10 12:09:36.891035] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.298 [2024-06-10 12:09:36.891051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.298 qpair failed and we were unable to recover it. 
00:31:43.298 [2024-06-10 12:09:36.900992] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.298 [2024-06-10 12:09:36.901099] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.298 [2024-06-10 12:09:36.901118] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.298 [2024-06-10 12:09:36.901124] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.298 [2024-06-10 12:09:36.901129] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.298 [2024-06-10 12:09:36.901143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.298 qpair failed and we were unable to recover it. 00:31:43.298 [2024-06-10 12:09:36.911018] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.298 [2024-06-10 12:09:36.911084] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.298 [2024-06-10 12:09:36.911097] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.299 [2024-06-10 12:09:36.911102] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.299 [2024-06-10 12:09:36.911107] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.299 [2024-06-10 12:09:36.911119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.299 qpair failed and we were unable to recover it. 00:31:43.299 [2024-06-10 12:09:36.921040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.299 [2024-06-10 12:09:36.921090] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.299 [2024-06-10 12:09:36.921102] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.299 [2024-06-10 12:09:36.921107] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.299 [2024-06-10 12:09:36.921112] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.299 [2024-06-10 12:09:36.921123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.299 qpair failed and we were unable to recover it. 
00:31:43.299 [2024-06-10 12:09:36.931103] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.299 [2024-06-10 12:09:36.931174] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.299 [2024-06-10 12:09:36.931186] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.299 [2024-06-10 12:09:36.931191] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.299 [2024-06-10 12:09:36.931195] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.299 [2024-06-10 12:09:36.931206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.299 qpair failed and we were unable to recover it. 00:31:43.299 [2024-06-10 12:09:36.941118] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.299 [2024-06-10 12:09:36.941172] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.299 [2024-06-10 12:09:36.941187] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.299 [2024-06-10 12:09:36.941192] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.299 [2024-06-10 12:09:36.941197] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.299 [2024-06-10 12:09:36.941207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.299 qpair failed and we were unable to recover it. 00:31:43.299 [2024-06-10 12:09:36.951134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.299 [2024-06-10 12:09:36.951187] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.299 [2024-06-10 12:09:36.951199] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.299 [2024-06-10 12:09:36.951204] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.299 [2024-06-10 12:09:36.951209] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.299 [2024-06-10 12:09:36.951219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.299 qpair failed and we were unable to recover it. 
00:31:43.299 [2024-06-10 12:09:36.961146] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.299 [2024-06-10 12:09:36.961199] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.299 [2024-06-10 12:09:36.961211] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.299 [2024-06-10 12:09:36.961216] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.299 [2024-06-10 12:09:36.961221] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.299 [2024-06-10 12:09:36.961231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.299 qpair failed and we were unable to recover it. 00:31:43.299 [2024-06-10 12:09:36.971159] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.299 [2024-06-10 12:09:36.971213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.299 [2024-06-10 12:09:36.971225] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.299 [2024-06-10 12:09:36.971230] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.299 [2024-06-10 12:09:36.971234] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.299 [2024-06-10 12:09:36.971248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.299 qpair failed and we were unable to recover it. 00:31:43.299 [2024-06-10 12:09:36.981211] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.299 [2024-06-10 12:09:36.981276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.299 [2024-06-10 12:09:36.981288] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.299 [2024-06-10 12:09:36.981293] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.299 [2024-06-10 12:09:36.981300] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.299 [2024-06-10 12:09:36.981311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.299 qpair failed and we were unable to recover it. 
00:31:43.299 [2024-06-10 12:09:36.991290] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.299 [2024-06-10 12:09:36.991349] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.299 [2024-06-10 12:09:36.991361] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.299 [2024-06-10 12:09:36.991366] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.299 [2024-06-10 12:09:36.991371] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.299 [2024-06-10 12:09:36.991381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.299 qpair failed and we were unable to recover it. 00:31:43.299 [2024-06-10 12:09:37.001264] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.299 [2024-06-10 12:09:37.001351] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.299 [2024-06-10 12:09:37.001364] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.299 [2024-06-10 12:09:37.001369] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.299 [2024-06-10 12:09:37.001374] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.299 [2024-06-10 12:09:37.001384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.299 qpair failed and we were unable to recover it. 00:31:43.299 [2024-06-10 12:09:37.011302] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.299 [2024-06-10 12:09:37.011354] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.299 [2024-06-10 12:09:37.011366] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.299 [2024-06-10 12:09:37.011371] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.299 [2024-06-10 12:09:37.011375] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.299 [2024-06-10 12:09:37.011385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.299 qpair failed and we were unable to recover it. 
00:31:43.299 [2024-06-10 12:09:37.021364] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.299 [2024-06-10 12:09:37.021441] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.299 [2024-06-10 12:09:37.021452] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.299 [2024-06-10 12:09:37.021457] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.299 [2024-06-10 12:09:37.021462] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.299 [2024-06-10 12:09:37.021473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.299 qpair failed and we were unable to recover it. 00:31:43.299 [2024-06-10 12:09:37.031225] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.299 [2024-06-10 12:09:37.031292] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.299 [2024-06-10 12:09:37.031304] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.299 [2024-06-10 12:09:37.031309] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.299 [2024-06-10 12:09:37.031314] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.299 [2024-06-10 12:09:37.031324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.299 qpair failed and we were unable to recover it. 00:31:43.299 [2024-06-10 12:09:37.041362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.299 [2024-06-10 12:09:37.041415] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.299 [2024-06-10 12:09:37.041427] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.299 [2024-06-10 12:09:37.041432] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.299 [2024-06-10 12:09:37.041436] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.299 [2024-06-10 12:09:37.041447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.299 qpair failed and we were unable to recover it. 
00:31:43.300 [2024-06-10 12:09:37.051280] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.300 [2024-06-10 12:09:37.051332] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.300 [2024-06-10 12:09:37.051344] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.300 [2024-06-10 12:09:37.051349] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.300 [2024-06-10 12:09:37.051353] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.300 [2024-06-10 12:09:37.051364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.300 qpair failed and we were unable to recover it. 00:31:43.300 [2024-06-10 12:09:37.061432] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.300 [2024-06-10 12:09:37.061484] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.300 [2024-06-10 12:09:37.061495] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.300 [2024-06-10 12:09:37.061500] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.300 [2024-06-10 12:09:37.061505] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.300 [2024-06-10 12:09:37.061515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.300 qpair failed and we were unable to recover it. 00:31:43.561 [2024-06-10 12:09:37.071476] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.562 [2024-06-10 12:09:37.071534] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.562 [2024-06-10 12:09:37.071545] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.562 [2024-06-10 12:09:37.071550] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.562 [2024-06-10 12:09:37.071558] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.562 [2024-06-10 12:09:37.071569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.562 qpair failed and we were unable to recover it. 
00:31:43.562 [2024-06-10 12:09:37.081490] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.562 [2024-06-10 12:09:37.081540] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.562 [2024-06-10 12:09:37.081552] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.562 [2024-06-10 12:09:37.081558] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.562 [2024-06-10 12:09:37.081562] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.562 [2024-06-10 12:09:37.081572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.562 qpair failed and we were unable to recover it. 00:31:43.562 [2024-06-10 12:09:37.091557] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.562 [2024-06-10 12:09:37.091647] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.562 [2024-06-10 12:09:37.091658] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.562 [2024-06-10 12:09:37.091664] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.562 [2024-06-10 12:09:37.091669] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.562 [2024-06-10 12:09:37.091679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.562 qpair failed and we were unable to recover it. 00:31:43.562 [2024-06-10 12:09:37.101411] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.562 [2024-06-10 12:09:37.101463] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.562 [2024-06-10 12:09:37.101475] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.562 [2024-06-10 12:09:37.101480] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.562 [2024-06-10 12:09:37.101484] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.562 [2024-06-10 12:09:37.101495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.562 qpair failed and we were unable to recover it. 
00:31:43.562 [2024-06-10 12:09:37.111481] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.562 [2024-06-10 12:09:37.111542] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.562 [2024-06-10 12:09:37.111554] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.562 [2024-06-10 12:09:37.111559] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.562 [2024-06-10 12:09:37.111563] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.562 [2024-06-10 12:09:37.111574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.562 qpair failed and we were unable to recover it. 00:31:43.562 [2024-06-10 12:09:37.121573] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.562 [2024-06-10 12:09:37.121630] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.562 [2024-06-10 12:09:37.121642] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.562 [2024-06-10 12:09:37.121648] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.562 [2024-06-10 12:09:37.121652] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.562 [2024-06-10 12:09:37.121662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.562 qpair failed and we were unable to recover it. 00:31:43.562 [2024-06-10 12:09:37.131618] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.562 [2024-06-10 12:09:37.131673] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.562 [2024-06-10 12:09:37.131684] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.562 [2024-06-10 12:09:37.131689] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.562 [2024-06-10 12:09:37.131694] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.562 [2024-06-10 12:09:37.131704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.562 qpair failed and we were unable to recover it. 
00:31:43.562 [2024-06-10 12:09:37.141658] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.562 [2024-06-10 12:09:37.141743] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.562 [2024-06-10 12:09:37.141755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.562 [2024-06-10 12:09:37.141760] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.562 [2024-06-10 12:09:37.141764] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.562 [2024-06-10 12:09:37.141774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.562 qpair failed and we were unable to recover it. 00:31:43.562 [2024-06-10 12:09:37.151662] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.562 [2024-06-10 12:09:37.151719] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.562 [2024-06-10 12:09:37.151731] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.562 [2024-06-10 12:09:37.151736] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.562 [2024-06-10 12:09:37.151740] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.562 [2024-06-10 12:09:37.151752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.562 qpair failed and we were unable to recover it. 00:31:43.562 [2024-06-10 12:09:37.161687] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.562 [2024-06-10 12:09:37.161742] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.562 [2024-06-10 12:09:37.161755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.562 [2024-06-10 12:09:37.161763] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.562 [2024-06-10 12:09:37.161767] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.562 [2024-06-10 12:09:37.161778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.562 qpair failed and we were unable to recover it. 
00:31:43.562 [2024-06-10 12:09:37.171720] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.562 [2024-06-10 12:09:37.171772] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.562 [2024-06-10 12:09:37.171784] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.562 [2024-06-10 12:09:37.171789] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.562 [2024-06-10 12:09:37.171794] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.562 [2024-06-10 12:09:37.171804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.562 qpair failed and we were unable to recover it. 00:31:43.562 [2024-06-10 12:09:37.181730] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.562 [2024-06-10 12:09:37.181779] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.562 [2024-06-10 12:09:37.181791] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.562 [2024-06-10 12:09:37.181796] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.562 [2024-06-10 12:09:37.181801] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.562 [2024-06-10 12:09:37.181812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.562 qpair failed and we were unable to recover it. 00:31:43.562 [2024-06-10 12:09:37.191818] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.562 [2024-06-10 12:09:37.191897] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.562 [2024-06-10 12:09:37.191909] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.562 [2024-06-10 12:09:37.191914] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.562 [2024-06-10 12:09:37.191919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.562 [2024-06-10 12:09:37.191930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.562 qpair failed and we were unable to recover it. 
00:31:43.562 [2024-06-10 12:09:37.201866] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.562 [2024-06-10 12:09:37.201925] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.562 [2024-06-10 12:09:37.201943] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.563 [2024-06-10 12:09:37.201949] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.563 [2024-06-10 12:09:37.201954] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.563 [2024-06-10 12:09:37.201968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.563 qpair failed and we were unable to recover it. 00:31:43.563 [2024-06-10 12:09:37.211878] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.563 [2024-06-10 12:09:37.211930] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.563 [2024-06-10 12:09:37.211943] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.563 [2024-06-10 12:09:37.211948] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.563 [2024-06-10 12:09:37.211953] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.563 [2024-06-10 12:09:37.211964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.563 qpair failed and we were unable to recover it. 00:31:43.563 [2024-06-10 12:09:37.221890] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.563 [2024-06-10 12:09:37.221944] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.563 [2024-06-10 12:09:37.221956] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.563 [2024-06-10 12:09:37.221962] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.563 [2024-06-10 12:09:37.221966] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.563 [2024-06-10 12:09:37.221977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.563 qpair failed and we were unable to recover it. 
00:31:43.563 [2024-06-10 12:09:37.231885] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.563 [2024-06-10 12:09:37.231941] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.563 [2024-06-10 12:09:37.231953] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.563 [2024-06-10 12:09:37.231958] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.563 [2024-06-10 12:09:37.231963] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.563 [2024-06-10 12:09:37.231973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.563 qpair failed and we were unable to recover it. 00:31:43.563 [2024-06-10 12:09:37.241917] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.563 [2024-06-10 12:09:37.241971] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.563 [2024-06-10 12:09:37.241983] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.563 [2024-06-10 12:09:37.241988] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.563 [2024-06-10 12:09:37.241993] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.563 [2024-06-10 12:09:37.242003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.563 qpair failed and we were unable to recover it. 00:31:43.563 [2024-06-10 12:09:37.251930] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.563 [2024-06-10 12:09:37.251977] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.563 [2024-06-10 12:09:37.251989] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.563 [2024-06-10 12:09:37.251997] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.563 [2024-06-10 12:09:37.252002] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.563 [2024-06-10 12:09:37.252012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.563 qpair failed and we were unable to recover it. 
00:31:43.563 [2024-06-10 12:09:37.262000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.563 [2024-06-10 12:09:37.262081] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.563 [2024-06-10 12:09:37.262093] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.563 [2024-06-10 12:09:37.262098] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.563 [2024-06-10 12:09:37.262103] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.563 [2024-06-10 12:09:37.262113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.563 qpair failed and we were unable to recover it. 00:31:43.563 [2024-06-10 12:09:37.271999] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.563 [2024-06-10 12:09:37.272047] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.563 [2024-06-10 12:09:37.272059] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.563 [2024-06-10 12:09:37.272064] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.563 [2024-06-10 12:09:37.272069] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.563 [2024-06-10 12:09:37.272080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.563 qpair failed and we were unable to recover it. 00:31:43.563 [2024-06-10 12:09:37.282023] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.563 [2024-06-10 12:09:37.282076] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.563 [2024-06-10 12:09:37.282088] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.563 [2024-06-10 12:09:37.282093] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.563 [2024-06-10 12:09:37.282098] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.563 [2024-06-10 12:09:37.282108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.563 qpair failed and we were unable to recover it. 
00:31:43.563 [2024-06-10 12:09:37.292043] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.563 [2024-06-10 12:09:37.292091] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.563 [2024-06-10 12:09:37.292103] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.563 [2024-06-10 12:09:37.292108] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.563 [2024-06-10 12:09:37.292112] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.563 [2024-06-10 12:09:37.292123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.563 qpair failed and we were unable to recover it. 00:31:43.563 [2024-06-10 12:09:37.302074] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.563 [2024-06-10 12:09:37.302125] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.563 [2024-06-10 12:09:37.302136] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.563 [2024-06-10 12:09:37.302141] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.563 [2024-06-10 12:09:37.302146] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.563 [2024-06-10 12:09:37.302156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.563 qpair failed and we were unable to recover it. 00:31:43.563 [2024-06-10 12:09:37.312108] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.563 [2024-06-10 12:09:37.312162] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.563 [2024-06-10 12:09:37.312174] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.563 [2024-06-10 12:09:37.312179] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.563 [2024-06-10 12:09:37.312184] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.563 [2024-06-10 12:09:37.312194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.563 qpair failed and we were unable to recover it. 
00:31:43.563 [2024-06-10 12:09:37.322132] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.563 [2024-06-10 12:09:37.322234] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.563 [2024-06-10 12:09:37.322249] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.563 [2024-06-10 12:09:37.322254] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.563 [2024-06-10 12:09:37.322259] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.563 [2024-06-10 12:09:37.322269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.563 qpair failed and we were unable to recover it. 00:31:43.827 [2024-06-10 12:09:37.332159] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.827 [2024-06-10 12:09:37.332214] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.827 [2024-06-10 12:09:37.332226] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.827 [2024-06-10 12:09:37.332231] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.827 [2024-06-10 12:09:37.332236] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.827 [2024-06-10 12:09:37.332251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.827 qpair failed and we were unable to recover it. 00:31:43.827 [2024-06-10 12:09:37.342222] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.827 [2024-06-10 12:09:37.342276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.827 [2024-06-10 12:09:37.342291] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.827 [2024-06-10 12:09:37.342296] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.827 [2024-06-10 12:09:37.342301] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.827 [2024-06-10 12:09:37.342311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.827 qpair failed and we were unable to recover it. 
00:31:43.827 [2024-06-10 12:09:37.352224] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.827 [2024-06-10 12:09:37.352285] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.827 [2024-06-10 12:09:37.352297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.827 [2024-06-10 12:09:37.352302] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.827 [2024-06-10 12:09:37.352306] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.827 [2024-06-10 12:09:37.352317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.827 qpair failed and we were unable to recover it. 00:31:43.827 [2024-06-10 12:09:37.362251] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.827 [2024-06-10 12:09:37.362305] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.827 [2024-06-10 12:09:37.362317] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.827 [2024-06-10 12:09:37.362322] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.827 [2024-06-10 12:09:37.362327] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.827 [2024-06-10 12:09:37.362337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.827 qpair failed and we were unable to recover it. 00:31:43.827 [2024-06-10 12:09:37.372273] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.827 [2024-06-10 12:09:37.372327] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.827 [2024-06-10 12:09:37.372338] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.827 [2024-06-10 12:09:37.372343] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.827 [2024-06-10 12:09:37.372348] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.827 [2024-06-10 12:09:37.372358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.827 qpair failed and we were unable to recover it. 
00:31:43.827 [2024-06-10 12:09:37.382281] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.827 [2024-06-10 12:09:37.382334] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.827 [2024-06-10 12:09:37.382346] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.827 [2024-06-10 12:09:37.382351] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.827 [2024-06-10 12:09:37.382356] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.827 [2024-06-10 12:09:37.382369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.827 qpair failed and we were unable to recover it. 00:31:43.827 [2024-06-10 12:09:37.392343] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.827 [2024-06-10 12:09:37.392408] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.827 [2024-06-10 12:09:37.392420] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.827 [2024-06-10 12:09:37.392425] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.827 [2024-06-10 12:09:37.392430] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.827 [2024-06-10 12:09:37.392441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.827 qpair failed and we were unable to recover it. 00:31:43.827 [2024-06-10 12:09:37.402320] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.827 [2024-06-10 12:09:37.402367] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.827 [2024-06-10 12:09:37.402378] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.827 [2024-06-10 12:09:37.402384] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.827 [2024-06-10 12:09:37.402388] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.827 [2024-06-10 12:09:37.402399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.827 qpair failed and we were unable to recover it. 
00:31:43.827 [2024-06-10 12:09:37.412358] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.827 [2024-06-10 12:09:37.412411] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.827 [2024-06-10 12:09:37.412423] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.827 [2024-06-10 12:09:37.412428] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.827 [2024-06-10 12:09:37.412432] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.827 [2024-06-10 12:09:37.412443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.827 qpair failed and we were unable to recover it. 00:31:43.827 [2024-06-10 12:09:37.422447] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.827 [2024-06-10 12:09:37.422497] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.827 [2024-06-10 12:09:37.422509] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.827 [2024-06-10 12:09:37.422514] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.827 [2024-06-10 12:09:37.422518] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.827 [2024-06-10 12:09:37.422528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.827 qpair failed and we were unable to recover it. 00:31:43.827 [2024-06-10 12:09:37.432419] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.828 [2024-06-10 12:09:37.432472] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.828 [2024-06-10 12:09:37.432486] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.828 [2024-06-10 12:09:37.432491] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.828 [2024-06-10 12:09:37.432496] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.828 [2024-06-10 12:09:37.432506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.828 qpair failed and we were unable to recover it. 
00:31:43.828 [2024-06-10 12:09:37.442507] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.828 [2024-06-10 12:09:37.442566] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.828 [2024-06-10 12:09:37.442578] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.828 [2024-06-10 12:09:37.442584] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.828 [2024-06-10 12:09:37.442588] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.828 [2024-06-10 12:09:37.442598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.828 qpair failed and we were unable to recover it. 00:31:43.828 [2024-06-10 12:09:37.452502] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.828 [2024-06-10 12:09:37.452547] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.828 [2024-06-10 12:09:37.452559] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.828 [2024-06-10 12:09:37.452564] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.828 [2024-06-10 12:09:37.452568] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.828 [2024-06-10 12:09:37.452579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.828 qpair failed and we were unable to recover it. 00:31:43.828 [2024-06-10 12:09:37.462508] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.828 [2024-06-10 12:09:37.462559] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.828 [2024-06-10 12:09:37.462571] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.828 [2024-06-10 12:09:37.462576] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.828 [2024-06-10 12:09:37.462580] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.828 [2024-06-10 12:09:37.462591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.828 qpair failed and we were unable to recover it. 
00:31:43.828 [2024-06-10 12:09:37.472534] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.828 [2024-06-10 12:09:37.472590] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.828 [2024-06-10 12:09:37.472602] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.828 [2024-06-10 12:09:37.472607] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.828 [2024-06-10 12:09:37.472611] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.828 [2024-06-10 12:09:37.472625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.828 qpair failed and we were unable to recover it. 00:31:43.828 [2024-06-10 12:09:37.482554] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.828 [2024-06-10 12:09:37.482653] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.828 [2024-06-10 12:09:37.482666] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.828 [2024-06-10 12:09:37.482671] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.828 [2024-06-10 12:09:37.482676] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.828 [2024-06-10 12:09:37.482686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.828 qpair failed and we were unable to recover it. 00:31:43.828 [2024-06-10 12:09:37.492580] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.828 [2024-06-10 12:09:37.492634] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.828 [2024-06-10 12:09:37.492646] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.828 [2024-06-10 12:09:37.492651] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.828 [2024-06-10 12:09:37.492655] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.828 [2024-06-10 12:09:37.492665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.828 qpair failed and we were unable to recover it. 
00:31:43.828 [2024-06-10 12:09:37.502642] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.828 [2024-06-10 12:09:37.502693] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.828 [2024-06-10 12:09:37.502705] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.828 [2024-06-10 12:09:37.502710] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.828 [2024-06-10 12:09:37.502714] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.828 [2024-06-10 12:09:37.502725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.828 qpair failed and we were unable to recover it. 00:31:43.828 [2024-06-10 12:09:37.512625] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.828 [2024-06-10 12:09:37.512696] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.828 [2024-06-10 12:09:37.512707] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.828 [2024-06-10 12:09:37.512712] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.828 [2024-06-10 12:09:37.512717] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.828 [2024-06-10 12:09:37.512727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.828 qpair failed and we were unable to recover it. 00:31:43.828 [2024-06-10 12:09:37.522681] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.828 [2024-06-10 12:09:37.522779] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.828 [2024-06-10 12:09:37.522796] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.828 [2024-06-10 12:09:37.522801] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.828 [2024-06-10 12:09:37.522806] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.828 [2024-06-10 12:09:37.522816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.828 qpair failed and we were unable to recover it. 
00:31:43.828 [2024-06-10 12:09:37.532689] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.828 [2024-06-10 12:09:37.532744] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.828 [2024-06-10 12:09:37.532756] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.828 [2024-06-10 12:09:37.532761] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.828 [2024-06-10 12:09:37.532765] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.828 [2024-06-10 12:09:37.532776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.828 qpair failed and we were unable to recover it. 00:31:43.828 [2024-06-10 12:09:37.542724] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.828 [2024-06-10 12:09:37.542776] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.828 [2024-06-10 12:09:37.542788] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.828 [2024-06-10 12:09:37.542793] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.828 [2024-06-10 12:09:37.542797] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.828 [2024-06-10 12:09:37.542807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.828 qpair failed and we were unable to recover it. 00:31:43.828 [2024-06-10 12:09:37.552792] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.828 [2024-06-10 12:09:37.552878] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.828 [2024-06-10 12:09:37.552890] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.828 [2024-06-10 12:09:37.552896] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.828 [2024-06-10 12:09:37.552901] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.828 [2024-06-10 12:09:37.552911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.828 qpair failed and we were unable to recover it. 
00:31:43.828 [2024-06-10 12:09:37.562767] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.828 [2024-06-10 12:09:37.562859] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.828 [2024-06-10 12:09:37.562872] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.828 [2024-06-10 12:09:37.562877] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.828 [2024-06-10 12:09:37.562884] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.829 [2024-06-10 12:09:37.562894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.829 qpair failed and we were unable to recover it. 00:31:43.829 [2024-06-10 12:09:37.572677] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.829 [2024-06-10 12:09:37.572729] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.829 [2024-06-10 12:09:37.572741] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.829 [2024-06-10 12:09:37.572746] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.829 [2024-06-10 12:09:37.572751] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.829 [2024-06-10 12:09:37.572762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.829 qpair failed and we were unable to recover it. 00:31:43.829 [2024-06-10 12:09:37.582848] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.829 [2024-06-10 12:09:37.582899] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.829 [2024-06-10 12:09:37.582911] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.829 [2024-06-10 12:09:37.582916] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.829 [2024-06-10 12:09:37.582921] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.829 [2024-06-10 12:09:37.582931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.829 qpair failed and we were unable to recover it. 
00:31:43.829 [2024-06-10 12:09:37.592862] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.829 [2024-06-10 12:09:37.592922] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.829 [2024-06-10 12:09:37.592941] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.829 [2024-06-10 12:09:37.592946] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.829 [2024-06-10 12:09:37.592951] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:43.829 [2024-06-10 12:09:37.592965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.829 qpair failed and we were unable to recover it. 00:31:44.091 [2024-06-10 12:09:37.602876] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.091 [2024-06-10 12:09:37.602933] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.091 [2024-06-10 12:09:37.602947] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.091 [2024-06-10 12:09:37.602952] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.091 [2024-06-10 12:09:37.602957] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:44.091 [2024-06-10 12:09:37.602968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:44.091 qpair failed and we were unable to recover it. 00:31:44.091 [2024-06-10 12:09:37.612925] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.091 [2024-06-10 12:09:37.612988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.091 [2024-06-10 12:09:37.613007] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.091 [2024-06-10 12:09:37.613013] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.091 [2024-06-10 12:09:37.613018] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:44.091 [2024-06-10 12:09:37.613032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:44.091 qpair failed and we were unable to recover it. 
00:31:44.091 [2024-06-10 12:09:37.622918] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.091 [2024-06-10 12:09:37.622973] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.091 [2024-06-10 12:09:37.622992] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.091 [2024-06-10 12:09:37.622998] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.091 [2024-06-10 12:09:37.623003] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:44.091 [2024-06-10 12:09:37.623016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:44.091 qpair failed and we were unable to recover it. 00:31:44.091 [2024-06-10 12:09:37.632969] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.091 [2024-06-10 12:09:37.633038] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.091 [2024-06-10 12:09:37.633057] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.091 [2024-06-10 12:09:37.633063] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.091 [2024-06-10 12:09:37.633068] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:44.091 [2024-06-10 12:09:37.633081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:44.091 qpair failed and we were unable to recover it. 00:31:44.091 [2024-06-10 12:09:37.643038] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.091 [2024-06-10 12:09:37.643088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.091 [2024-06-10 12:09:37.643103] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.091 [2024-06-10 12:09:37.643108] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.091 [2024-06-10 12:09:37.643113] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:44.091 [2024-06-10 12:09:37.643125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:44.092 qpair failed and we were unable to recover it. 
00:31:44.092 [2024-06-10 12:09:37.653040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.092 [2024-06-10 12:09:37.653091] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.092 [2024-06-10 12:09:37.653103] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.092 [2024-06-10 12:09:37.653113] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.092 [2024-06-10 12:09:37.653118] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:44.092 [2024-06-10 12:09:37.653129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:44.092 qpair failed and we were unable to recover it. 00:31:44.092 [2024-06-10 12:09:37.663062] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.092 [2024-06-10 12:09:37.663154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.092 [2024-06-10 12:09:37.663167] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.092 [2024-06-10 12:09:37.663172] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.092 [2024-06-10 12:09:37.663177] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:44.092 [2024-06-10 12:09:37.663187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:44.092 qpair failed and we were unable to recover it. 00:31:44.092 [2024-06-10 12:09:37.673096] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.092 [2024-06-10 12:09:37.673155] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.092 [2024-06-10 12:09:37.673167] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.092 [2024-06-10 12:09:37.673172] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.092 [2024-06-10 12:09:37.673177] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:44.092 [2024-06-10 12:09:37.673187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:44.092 qpair failed and we were unable to recover it. 
00:31:44.092 [2024-06-10 12:09:37.683105] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.092 [2024-06-10 12:09:37.683159] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.092 [2024-06-10 12:09:37.683171] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.092 [2024-06-10 12:09:37.683177] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.092 [2024-06-10 12:09:37.683181] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:44.092 [2024-06-10 12:09:37.683192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:44.092 qpair failed and we were unable to recover it. 00:31:44.092 [2024-06-10 12:09:37.693127] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.092 [2024-06-10 12:09:37.693179] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.092 [2024-06-10 12:09:37.693190] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.092 [2024-06-10 12:09:37.693196] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.092 [2024-06-10 12:09:37.693200] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:44.092 [2024-06-10 12:09:37.693211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:44.092 qpair failed and we were unable to recover it. 00:31:44.092 [2024-06-10 12:09:37.703049] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.092 [2024-06-10 12:09:37.703104] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.092 [2024-06-10 12:09:37.703117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.092 [2024-06-10 12:09:37.703121] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.092 [2024-06-10 12:09:37.703126] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:44.092 [2024-06-10 12:09:37.703136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:44.092 qpair failed and we were unable to recover it. 
00:31:44.092 [2024-06-10 12:09:37.713196] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.092 [2024-06-10 12:09:37.713259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.092 [2024-06-10 12:09:37.713271] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.092 [2024-06-10 12:09:37.713276] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.092 [2024-06-10 12:09:37.713281] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:44.092 [2024-06-10 12:09:37.713292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:44.092 qpair failed and we were unable to recover it. 00:31:44.092 [2024-06-10 12:09:37.723230] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.092 [2024-06-10 12:09:37.723308] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.092 [2024-06-10 12:09:37.723320] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.092 [2024-06-10 12:09:37.723325] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.092 [2024-06-10 12:09:37.723330] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:44.092 [2024-06-10 12:09:37.723341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:44.092 qpair failed and we were unable to recover it. 00:31:44.092 [2024-06-10 12:09:37.733247] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.092 [2024-06-10 12:09:37.733301] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.092 [2024-06-10 12:09:37.733313] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.092 [2024-06-10 12:09:37.733318] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.092 [2024-06-10 12:09:37.733323] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:44.092 [2024-06-10 12:09:37.733333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:44.092 qpair failed and we were unable to recover it. 
00:31:44.092 [2024-06-10 12:09:37.743266] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.092 [2024-06-10 12:09:37.743342] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.092 [2024-06-10 12:09:37.743354] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.092 [2024-06-10 12:09:37.743362] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.092 [2024-06-10 12:09:37.743366] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:44.092 [2024-06-10 12:09:37.743376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:44.092 qpair failed and we were unable to recover it. 00:31:44.092 [2024-06-10 12:09:37.753339] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.092 [2024-06-10 12:09:37.753401] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.092 [2024-06-10 12:09:37.753413] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.092 [2024-06-10 12:09:37.753418] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.092 [2024-06-10 12:09:37.753423] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:44.092 [2024-06-10 12:09:37.753434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:44.092 qpair failed and we were unable to recover it. 00:31:44.092 [2024-06-10 12:09:37.763378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.092 [2024-06-10 12:09:37.763426] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.092 [2024-06-10 12:09:37.763438] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.092 [2024-06-10 12:09:37.763443] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.092 [2024-06-10 12:09:37.763448] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:44.092 [2024-06-10 12:09:37.763458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:44.092 qpair failed and we were unable to recover it. 
00:31:44.092 [2024-06-10 12:09:37.773269] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.092 [2024-06-10 12:09:37.773325] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.092 [2024-06-10 12:09:37.773337] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.092 [2024-06-10 12:09:37.773342] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.092 [2024-06-10 12:09:37.773346] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:44.092 [2024-06-10 12:09:37.773357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:44.092 qpair failed and we were unable to recover it. 00:31:44.092 [2024-06-10 12:09:37.783389] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.092 [2024-06-10 12:09:37.783443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.092 [2024-06-10 12:09:37.783456] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.093 [2024-06-10 12:09:37.783461] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.093 [2024-06-10 12:09:37.783466] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1be4000b90 00:31:44.093 [2024-06-10 12:09:37.783478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:44.093 qpair failed and we were unable to recover it. 
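The records above all repeat the same failure: each attempt to add an I/O qpair is rejected by the target with "Unknown controller ID 0x1", so the host-side Fabrics CONNECT completes with sct 1 / sc 130 and the qpair is dropped without recovery, which suggests the target no longer holds the admin-queue controller that the I/O queue's CNTLID refers to. As a hedged aside, the same CONNECT attempt can be issued by hand with nvme-cli against the listener named in these records; the commands below are illustrative and assume the target from the log is still listening.

  # illustrative manual CONNECT against the listener from the log (assumes nvme-cli is installed)
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # if the attach happens to succeed, detach again with:
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1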
00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Write completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Write completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Write completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Write completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Write completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Write completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Write completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Write completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Write completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Write completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Write completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Write completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 [2024-06-10 12:09:37.783899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.093 [2024-06-10 12:09:37.793450] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.093 [2024-06-10 12:09:37.793519] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.093 [2024-06-10 12:09:37.793541] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.093 [2024-06-10 12:09:37.793549] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric 
CONNECT command 00:31:44.093 [2024-06-10 12:09:37.793555] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf6a8b0 00:31:44.093 [2024-06-10 12:09:37.793571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.093 qpair failed and we were unable to recover it. 00:31:44.093 [2024-06-10 12:09:37.803344] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.093 [2024-06-10 12:09:37.803416] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.093 [2024-06-10 12:09:37.803432] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.093 [2024-06-10 12:09:37.803440] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.093 [2024-06-10 12:09:37.803446] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf6a8b0 00:31:44.093 [2024-06-10 12:09:37.803462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.093 qpair failed and we were unable to recover it. 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Write completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Write completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Write completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Write completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Write completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Write completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Write completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 
00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Write completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Write completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Write completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 [2024-06-10 12:09:37.804413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:44.093 [2024-06-10 12:09:37.813504] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.093 [2024-06-10 12:09:37.813644] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.093 [2024-06-10 12:09:37.813694] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.093 [2024-06-10 12:09:37.813716] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.093 [2024-06-10 12:09:37.813736] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1bdc000b90 00:31:44.093 [2024-06-10 12:09:37.813782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:44.093 qpair failed and we were unable to recover it. 00:31:44.093 [2024-06-10 12:09:37.823578] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.093 [2024-06-10 12:09:37.823698] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.093 [2024-06-10 12:09:37.823733] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.093 [2024-06-10 12:09:37.823750] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.093 [2024-06-10 12:09:37.823765] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1bdc000b90 00:31:44.093 [2024-06-10 12:09:37.823799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:44.093 qpair failed and we were unable to recover it. 
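With the target still answering "Unknown controller ID 0x1" for every new I/O qpair, a natural next step when triaging a run like this is to ask the target what it still knows about the subsystem. A hedged sketch, assuming a stock SPDK checkout with the target's RPC socket at its default path:

  # illustrative target-side inspection; RPC names and paths assume a stock SPDK tree
  ./scripts/rpc.py nvmf_get_subsystems
  ./scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1

A missing controller entry for cnode1 while the host keeps retrying would be consistent with the rejections logged above.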
00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.093 Read completed with error (sct=0, sc=8) 00:31:44.093 starting I/O failed 00:31:44.094 Read completed with error (sct=0, sc=8) 00:31:44.094 starting I/O failed 00:31:44.094 Write completed with error (sct=0, sc=8) 00:31:44.094 starting I/O failed 00:31:44.094 Write completed with error (sct=0, sc=8) 00:31:44.094 starting I/O failed 00:31:44.094 Write completed with error (sct=0, sc=8) 00:31:44.094 starting I/O failed 00:31:44.094 Write completed with error (sct=0, sc=8) 00:31:44.094 starting I/O failed 00:31:44.094 Read completed with error (sct=0, sc=8) 00:31:44.094 starting I/O failed 00:31:44.094 Write completed with error (sct=0, sc=8) 00:31:44.094 starting I/O failed 00:31:44.094 Read completed with error (sct=0, sc=8) 00:31:44.094 starting I/O failed 00:31:44.094 Write completed with error (sct=0, sc=8) 00:31:44.094 starting I/O failed 00:31:44.094 Write completed with error (sct=0, sc=8) 00:31:44.094 starting I/O failed 00:31:44.094 Write completed with error (sct=0, sc=8) 00:31:44.094 starting I/O failed 00:31:44.094 Read completed with error (sct=0, sc=8) 00:31:44.094 starting I/O failed 00:31:44.094 Read completed with error (sct=0, sc=8) 00:31:44.094 starting I/O failed 00:31:44.094 Read completed with error (sct=0, sc=8) 00:31:44.094 starting I/O failed 00:31:44.094 Write completed with error (sct=0, sc=8) 00:31:44.094 starting I/O failed 00:31:44.094 Read completed with error (sct=0, sc=8) 00:31:44.094 starting I/O failed 00:31:44.094 Write completed with error (sct=0, sc=8) 00:31:44.094 starting I/O failed 00:31:44.094 Write completed with error (sct=0, sc=8) 00:31:44.094 starting I/O failed 00:31:44.094 Read completed with error (sct=0, sc=8) 00:31:44.094 starting I/O failed 00:31:44.094 Write completed with error (sct=0, sc=8) 00:31:44.094 starting I/O failed 00:31:44.094 [2024-06-10 12:09:37.824646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:44.094 [2024-06-10 12:09:37.833570] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.094 [2024-06-10 12:09:37.833727] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.094 [2024-06-10 12:09:37.833778] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.094 [2024-06-10 12:09:37.833801] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric 
CONNECT command 00:31:44.094 [2024-06-10 12:09:37.833822] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1bec000b90 00:31:44.094 [2024-06-10 12:09:37.833866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:44.094 qpair failed and we were unable to recover it. 00:31:44.094 [2024-06-10 12:09:37.843563] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.094 [2024-06-10 12:09:37.843675] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.094 [2024-06-10 12:09:37.843706] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.094 [2024-06-10 12:09:37.843721] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.094 [2024-06-10 12:09:37.843734] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1bec000b90 00:31:44.094 [2024-06-10 12:09:37.843765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:44.094 qpair failed and we were unable to recover it. 00:31:44.094 [2024-06-10 12:09:37.844002] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf68600 is same with the state(5) to be set 00:31:44.094 [2024-06-10 12:09:37.844199] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf68600 (9): Bad file descriptor 00:31:44.094 Initializing NVMe Controllers 00:31:44.094 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:44.094 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:44.094 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:31:44.094 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:31:44.094 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:31:44.094 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:31:44.094 Initialization complete. Launching workers. 
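The "Initializing NVMe Controllers ... Initialization complete. Launching workers." block marks the point where the test's SPDK initiator attaches to nqn.2016-06.io.spdk:cnode1 over TCP and fans I/O out across lcores 0-3. As a rough, hedged equivalent (the binary, queue depth and runtime below are assumptions, not values taken from this run), a standalone initiator exercising the same listener could be launched with SPDK's example perf tool:

  # illustrative standalone initiator run using SPDK's example perf tool
  ./build/examples/perf -q 32 -o 4096 -w randrw -M 50 -t 10 \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

When the target goes away mid-run, every outstanding command is completed with an error, which is what the long "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" listings above record.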
00:31:44.094 Starting thread on core 1 00:31:44.094 Starting thread on core 2 00:31:44.094 Starting thread on core 3 00:31:44.094 Starting thread on core 0 00:31:44.094 12:09:37 -- host/target_disconnect.sh@59 -- # sync 00:31:44.094 00:31:44.094 real 0m11.476s 00:31:44.094 user 0m20.700s 00:31:44.094 sys 0m3.721s 00:31:44.094 12:09:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:44.094 12:09:37 -- common/autotest_common.sh@10 -- # set +x 00:31:44.094 ************************************ 00:31:44.094 END TEST nvmf_target_disconnect_tc2 00:31:44.094 ************************************ 00:31:44.355 12:09:37 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:31:44.355 12:09:37 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:31:44.355 12:09:37 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:31:44.355 12:09:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:44.355 12:09:37 -- nvmf/common.sh@116 -- # sync 00:31:44.355 12:09:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:44.355 12:09:37 -- nvmf/common.sh@119 -- # set +e 00:31:44.355 12:09:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:44.355 12:09:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:44.355 rmmod nvme_tcp 00:31:44.355 rmmod nvme_fabrics 00:31:44.355 rmmod nvme_keyring 00:31:44.355 12:09:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:44.355 12:09:37 -- nvmf/common.sh@123 -- # set -e 00:31:44.355 12:09:37 -- nvmf/common.sh@124 -- # return 0 00:31:44.355 12:09:37 -- nvmf/common.sh@477 -- # '[' -n 2157505 ']' 00:31:44.355 12:09:37 -- nvmf/common.sh@478 -- # killprocess 2157505 00:31:44.355 12:09:37 -- common/autotest_common.sh@926 -- # '[' -z 2157505 ']' 00:31:44.355 12:09:37 -- common/autotest_common.sh@930 -- # kill -0 2157505 00:31:44.355 12:09:37 -- common/autotest_common.sh@931 -- # uname 00:31:44.355 12:09:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:44.355 12:09:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2157505 00:31:44.355 12:09:38 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:31:44.355 12:09:38 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:31:44.355 12:09:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2157505' 00:31:44.355 killing process with pid 2157505 00:31:44.355 12:09:38 -- common/autotest_common.sh@945 -- # kill 2157505 00:31:44.355 12:09:38 -- common/autotest_common.sh@950 -- # wait 2157505 00:31:44.615 12:09:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:44.615 12:09:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:44.615 12:09:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:44.615 12:09:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:44.615 12:09:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:44.615 12:09:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:44.615 12:09:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:44.615 12:09:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.528 12:09:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:46.528 00:31:46.528 real 0m21.384s 00:31:46.528 user 0m48.820s 00:31:46.528 sys 0m9.513s 00:31:46.528 12:09:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:46.528 12:09:40 -- common/autotest_common.sh@10 -- # set +x 00:31:46.528 ************************************ 00:31:46.528 END TEST nvmf_target_disconnect 00:31:46.528 
************************************ 00:31:46.528 12:09:40 -- nvmf/nvmf.sh@126 -- # timing_exit host 00:31:46.528 12:09:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:46.528 12:09:40 -- common/autotest_common.sh@10 -- # set +x 00:31:46.528 12:09:40 -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:31:46.528 00:31:46.528 real 24m22.453s 00:31:46.528 user 64m20.162s 00:31:46.528 sys 6m38.671s 00:31:46.528 12:09:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:46.528 12:09:40 -- common/autotest_common.sh@10 -- # set +x 00:31:46.528 ************************************ 00:31:46.528 END TEST nvmf_tcp 00:31:46.528 ************************************ 00:31:46.788 12:09:40 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:31:46.788 12:09:40 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:46.788 12:09:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:46.788 12:09:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:46.788 12:09:40 -- common/autotest_common.sh@10 -- # set +x 00:31:46.788 ************************************ 00:31:46.788 START TEST spdkcli_nvmf_tcp 00:31:46.788 ************************************ 00:31:46.788 12:09:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:46.788 * Looking for test storage... 00:31:46.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:31:46.788 12:09:40 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:31:46.789 12:09:40 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:31:46.789 12:09:40 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:31:46.789 12:09:40 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:46.789 12:09:40 -- nvmf/common.sh@7 -- # uname -s 00:31:46.789 12:09:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:46.789 12:09:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:46.789 12:09:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:46.789 12:09:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:46.789 12:09:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:46.789 12:09:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:46.789 12:09:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:46.789 12:09:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:46.789 12:09:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:46.789 12:09:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:46.789 12:09:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:46.789 12:09:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:46.789 12:09:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:46.789 12:09:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:46.789 12:09:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:46.789 12:09:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:46.789 12:09:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh 
]] 00:31:46.789 12:09:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:46.789 12:09:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:46.789 12:09:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.789 12:09:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.789 12:09:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.789 12:09:40 -- paths/export.sh@5 -- # export PATH 00:31:46.789 12:09:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.789 12:09:40 -- nvmf/common.sh@46 -- # : 0 00:31:46.789 12:09:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:46.789 12:09:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:46.789 12:09:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:46.789 12:09:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:46.789 12:09:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:46.789 12:09:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:46.789 12:09:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:46.789 12:09:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:46.789 12:09:40 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:46.789 12:09:40 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:46.789 12:09:40 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:46.789 12:09:40 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:46.789 12:09:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:46.789 12:09:40 -- common/autotest_common.sh@10 -- # set +x 00:31:46.789 12:09:40 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:46.789 12:09:40 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2159341 00:31:46.789 12:09:40 -- spdkcli/common.sh@34 -- # waitforlisten 2159341 00:31:46.789 12:09:40 -- common/autotest_common.sh@819 -- # '[' -z 2159341 ']' 00:31:46.789 12:09:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:46.789 12:09:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:46.789 
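run_nvmf_tgt above reduces to starting the target application with the mask shown and blocking until its JSON-RPC socket accepts requests. A stand-alone sketch of that step (run from the SPDK repository root; the polling loop is a hypothetical stand-in for the harness's waitforlisten helper, and rpc_get_methods is used here only as a cheap liveness probe):
# hedged sketch: launch the NVMe-oF target and wait for its RPC socket
./build/bin/nvmf_tgt -m 0x3 -p 0 &
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done
Once the socket answers, spdkcli_job.py can drive the /bdevs and /nvmf branches exactly as in the command list that follows.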
12:09:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:46.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:46.789 12:09:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:46.789 12:09:40 -- common/autotest_common.sh@10 -- # set +x 00:31:46.789 12:09:40 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:31:46.789 [2024-06-10 12:09:40.503132] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:46.789 [2024-06-10 12:09:40.503188] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2159341 ] 00:31:46.789 EAL: No free 2048 kB hugepages reported on node 1 00:31:47.048 [2024-06-10 12:09:40.563080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:47.048 [2024-06-10 12:09:40.627097] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:47.048 [2024-06-10 12:09:40.627289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:47.048 [2024-06-10 12:09:40.627441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.620 12:09:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:47.620 12:09:41 -- common/autotest_common.sh@852 -- # return 0 00:31:47.620 12:09:41 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:47.620 12:09:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:47.620 12:09:41 -- common/autotest_common.sh@10 -- # set +x 00:31:47.620 12:09:41 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:47.620 12:09:41 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:31:47.620 12:09:41 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:47.620 12:09:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:47.620 12:09:41 -- common/autotest_common.sh@10 -- # set +x 00:31:47.620 12:09:41 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:47.620 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:47.620 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:47.620 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:47.620 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:47.620 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:47.620 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:47.620 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:47.620 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:47.620 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:47.620 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:47.620 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:47.620 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:47.620 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:47.620 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:47.620 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:47.620 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:47.620 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:47.620 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:47.620 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:47.620 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:47.620 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:47.620 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:47.620 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:31:47.620 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:47.620 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:47.620 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:47.620 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:47.620 ' 00:31:47.895 [2024-06-10 12:09:41.603759] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:31:50.443 [2024-06-10 12:09:43.862543] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:51.828 [2024-06-10 12:09:45.162842] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:31:54.385 [2024-06-10 12:09:47.574024] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:56.299 [2024-06-10 12:09:49.656398] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:57.685 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:57.685 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:57.685 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:57.685 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:57.685 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:57.685 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:57.685 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:57.685 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 
allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:57.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:57.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:57.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:57.685 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:57.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:57.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:57.685 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:57.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:57.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:57.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:57.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:57.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:57.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:57.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:57.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:57.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:57.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:57.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:57.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:57.685 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:57.685 12:09:51 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:57.685 12:09:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:57.685 12:09:51 -- common/autotest_common.sh@10 -- # set +x 00:31:57.685 12:09:51 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:57.685 12:09:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:57.685 12:09:51 -- common/autotest_common.sh@10 -- # set +x 00:31:57.685 12:09:51 -- spdkcli/nvmf.sh@69 -- # check_match 00:31:57.685 12:09:51 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:57.946 12:09:51 -- 
spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:58.208 12:09:51 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:58.208 12:09:51 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:58.208 12:09:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:58.208 12:09:51 -- common/autotest_common.sh@10 -- # set +x 00:31:58.208 12:09:51 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:58.208 12:09:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:58.208 12:09:51 -- common/autotest_common.sh@10 -- # set +x 00:31:58.208 12:09:51 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:58.208 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:58.208 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:58.208 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:58.208 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:58.208 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:58.208 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:58.208 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:58.208 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:58.208 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:58.208 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:58.208 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:58.208 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:58.208 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:58.208 ' 00:32:03.497 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:03.498 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:03.498 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:03.498 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:03.498 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:03.498 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:03.498 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:03.498 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:03.498 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:32:03.498 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:32:03.498 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:32:03.498 Executing 
command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:03.498 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:03.498 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:03.498 12:09:57 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:32:03.498 12:09:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:03.498 12:09:57 -- common/autotest_common.sh@10 -- # set +x 00:32:03.498 12:09:57 -- spdkcli/nvmf.sh@90 -- # killprocess 2159341 00:32:03.498 12:09:57 -- common/autotest_common.sh@926 -- # '[' -z 2159341 ']' 00:32:03.498 12:09:57 -- common/autotest_common.sh@930 -- # kill -0 2159341 00:32:03.498 12:09:57 -- common/autotest_common.sh@931 -- # uname 00:32:03.498 12:09:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:03.498 12:09:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2159341 00:32:03.758 12:09:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:03.758 12:09:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:03.758 12:09:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2159341' 00:32:03.758 killing process with pid 2159341 00:32:03.758 12:09:57 -- common/autotest_common.sh@945 -- # kill 2159341 00:32:03.758 [2024-06-10 12:09:57.293114] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:32:03.758 12:09:57 -- common/autotest_common.sh@950 -- # wait 2159341 00:32:03.758 12:09:57 -- spdkcli/nvmf.sh@1 -- # cleanup 00:32:03.758 12:09:57 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:03.758 12:09:57 -- spdkcli/common.sh@13 -- # '[' -n 2159341 ']' 00:32:03.758 12:09:57 -- spdkcli/common.sh@14 -- # killprocess 2159341 00:32:03.758 12:09:57 -- common/autotest_common.sh@926 -- # '[' -z 2159341 ']' 00:32:03.758 12:09:57 -- common/autotest_common.sh@930 -- # kill -0 2159341 00:32:03.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2159341) - No such process 00:32:03.758 12:09:57 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2159341 is not found' 00:32:03.758 Process with pid 2159341 is not found 00:32:03.758 12:09:57 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:03.758 12:09:57 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:03.758 12:09:57 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:03.758 00:32:03.758 real 0m17.079s 00:32:03.758 user 0m37.297s 00:32:03.758 sys 0m0.797s 00:32:03.758 12:09:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:03.758 12:09:57 -- common/autotest_common.sh@10 -- # set +x 00:32:03.758 ************************************ 00:32:03.758 END TEST spdkcli_nvmf_tcp 00:32:03.758 ************************************ 00:32:03.758 12:09:57 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:03.758 12:09:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:03.758 12:09:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:03.758 12:09:57 -- common/autotest_common.sh@10 -- # set +x 00:32:03.758 ************************************ 00:32:03.758 START TEST 
nvmf_identify_passthru 00:32:03.758 ************************************ 00:32:03.758 12:09:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:04.019 * Looking for test storage... 00:32:04.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:04.019 12:09:57 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:04.019 12:09:57 -- nvmf/common.sh@7 -- # uname -s 00:32:04.019 12:09:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:04.019 12:09:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:04.019 12:09:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:04.019 12:09:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:04.019 12:09:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:04.019 12:09:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:04.019 12:09:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:04.019 12:09:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:04.019 12:09:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:04.019 12:09:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:04.019 12:09:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:04.019 12:09:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:04.019 12:09:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:04.019 12:09:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:04.019 12:09:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:04.019 12:09:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:04.020 12:09:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:04.020 12:09:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:04.020 12:09:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:04.020 12:09:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.020 12:09:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.020 12:09:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.020 12:09:57 -- paths/export.sh@5 -- # export PATH 00:32:04.020 
12:09:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.020 12:09:57 -- nvmf/common.sh@46 -- # : 0 00:32:04.020 12:09:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:04.020 12:09:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:04.020 12:09:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:04.020 12:09:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:04.020 12:09:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:04.020 12:09:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:04.020 12:09:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:04.020 12:09:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:04.020 12:09:57 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:04.020 12:09:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:04.020 12:09:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:04.020 12:09:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:04.020 12:09:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.020 12:09:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.020 12:09:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.020 12:09:57 -- paths/export.sh@5 -- # export PATH 00:32:04.020 12:09:57 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.020 12:09:57 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:04.020 12:09:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:04.020 12:09:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:04.020 12:09:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:04.020 12:09:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:04.020 12:09:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:04.020 12:09:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.020 12:09:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:04.020 12:09:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:04.020 12:09:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:04.020 12:09:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:04.020 12:09:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:04.020 12:09:57 -- common/autotest_common.sh@10 -- # set +x 00:32:10.609 12:10:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:10.609 12:10:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:10.609 12:10:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:10.609 12:10:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:10.609 12:10:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:10.609 12:10:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:10.609 12:10:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:10.609 12:10:04 -- nvmf/common.sh@294 -- # net_devs=() 00:32:10.609 12:10:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:10.609 12:10:04 -- nvmf/common.sh@295 -- # e810=() 00:32:10.609 12:10:04 -- nvmf/common.sh@295 -- # local -ga e810 00:32:10.609 12:10:04 -- nvmf/common.sh@296 -- # x722=() 00:32:10.609 12:10:04 -- nvmf/common.sh@296 -- # local -ga x722 00:32:10.609 12:10:04 -- nvmf/common.sh@297 -- # mlx=() 00:32:10.609 12:10:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:10.609 12:10:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:10.609 12:10:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:10.609 12:10:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:10.609 12:10:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:10.609 12:10:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:10.609 12:10:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:10.609 12:10:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:10.609 12:10:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:10.609 12:10:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:10.609 12:10:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:10.609 12:10:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:10.609 12:10:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:10.609 12:10:04 -- nvmf/common.sh@320 -- # [[ tcp 
== rdma ]] 00:32:10.609 12:10:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:10.609 12:10:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:10.609 12:10:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:10.609 12:10:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:10.609 12:10:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:10.609 12:10:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:10.609 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:10.609 12:10:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:10.609 12:10:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:10.609 12:10:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:10.609 12:10:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:10.609 12:10:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:10.609 12:10:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:10.609 12:10:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:10.609 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:10.609 12:10:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:10.609 12:10:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:10.609 12:10:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:10.609 12:10:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:10.609 12:10:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:10.609 12:10:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:10.609 12:10:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:10.609 12:10:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:10.609 12:10:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:10.609 12:10:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:10.609 12:10:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:10.609 12:10:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:10.609 12:10:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:10.609 Found net devices under 0000:31:00.0: cvl_0_0 00:32:10.609 12:10:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:10.609 12:10:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:10.609 12:10:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:10.609 12:10:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:10.609 12:10:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:10.609 12:10:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:10.609 Found net devices under 0000:31:00.1: cvl_0_1 00:32:10.609 12:10:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:10.609 12:10:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:10.610 12:10:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:10.610 12:10:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:10.610 12:10:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:10.610 12:10:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:10.610 12:10:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:10.610 12:10:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:10.610 12:10:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:10.610 12:10:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:10.610 12:10:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:10.610 12:10:04 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:10.610 12:10:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:10.610 12:10:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:10.610 12:10:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:10.610 12:10:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:10.610 12:10:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:10.610 12:10:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:10.610 12:10:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:10.871 12:10:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:10.871 12:10:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:10.871 12:10:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:10.871 12:10:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:10.871 12:10:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:10.871 12:10:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:10.871 12:10:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:10.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:10.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:32:10.871 00:32:10.871 --- 10.0.0.2 ping statistics --- 00:32:10.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:10.871 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:32:10.871 12:10:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:10.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:10.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.351 ms 00:32:10.871 00:32:10.871 --- 10.0.0.1 ping statistics --- 00:32:10.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:10.871 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:32:10.871 12:10:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:10.871 12:10:04 -- nvmf/common.sh@410 -- # return 0 00:32:10.871 12:10:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:10.871 12:10:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:10.871 12:10:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:10.871 12:10:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:10.871 12:10:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:10.871 12:10:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:10.871 12:10:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:10.871 12:10:04 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:10.871 12:10:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:10.871 12:10:04 -- common/autotest_common.sh@10 -- # set +x 00:32:10.871 12:10:04 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:10.871 12:10:04 -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:10.871 12:10:04 -- common/autotest_common.sh@1509 -- # local bdfs 00:32:10.871 12:10:04 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:10.871 12:10:04 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:10.871 12:10:04 -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:10.871 12:10:04 -- common/autotest_common.sh@1498 -- # local bdfs 00:32:10.871 12:10:04 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:32:10.871 12:10:04 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:10.871 12:10:04 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:11.132 12:10:04 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:11.132 12:10:04 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:32:11.132 12:10:04 -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:32:11.132 12:10:04 -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:32:11.132 12:10:04 -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:32:11.132 12:10:04 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:32:11.132 12:10:04 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:11.132 12:10:04 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:11.132 EAL: No free 2048 kB hugepages reported on node 1 00:32:11.393 12:10:05 -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:32:11.393 12:10:05 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:32:11.393 12:10:05 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:11.393 12:10:05 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:11.654 EAL: No free 2048 kB hugepages reported on node 1 00:32:11.914 12:10:05 -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:32:11.914 12:10:05 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:11.914 12:10:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:11.914 12:10:05 -- common/autotest_common.sh@10 -- # set +x 00:32:11.914 12:10:05 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:11.914 12:10:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:11.914 12:10:05 -- common/autotest_common.sh@10 -- # set +x 00:32:11.914 12:10:05 -- target/identify_passthru.sh@31 -- # nvmfpid=2166540 00:32:11.915 12:10:05 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:11.915 12:10:05 -- target/identify_passthru.sh@35 -- # waitforlisten 2166540 00:32:11.915 12:10:05 -- common/autotest_common.sh@819 -- # '[' -z 2166540 ']' 00:32:11.915 12:10:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:11.915 12:10:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:11.915 12:10:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:11.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:11.915 12:10:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:11.915 12:10:05 -- common/autotest_common.sh@10 -- # set +x 00:32:11.915 12:10:05 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:12.175 [2024-06-10 12:10:05.712340] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
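Before the passthru target comes up, the script resolves the first local NVMe PCI address and reads its serial and model number directly over PCIe; those values (S64GNE0R605494 / SAMSUNG) are what the fabrics-side identify is later compared against. Condensed from the commands above (paths shortened to the repository root; the bdf variable and head -n1 are shorthand introduced here for the harness's get_first_nvme_bdf helper):
# find the first NVMe PCI address known to SPDK
bdf=$(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
# read serial and model straight from the PCIe controller
./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}'
./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}'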
00:32:12.175 [2024-06-10 12:10:05.712435] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:12.175 EAL: No free 2048 kB hugepages reported on node 1 00:32:12.175 [2024-06-10 12:10:05.783180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:12.175 [2024-06-10 12:10:05.850997] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:12.175 [2024-06-10 12:10:05.851131] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:12.175 [2024-06-10 12:10:05.851140] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:12.175 [2024-06-10 12:10:05.851149] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:12.175 [2024-06-10 12:10:05.851277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:12.175 [2024-06-10 12:10:05.851496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:12.175 [2024-06-10 12:10:05.851497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:12.175 [2024-06-10 12:10:05.851345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:12.747 12:10:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:12.747 12:10:06 -- common/autotest_common.sh@852 -- # return 0 00:32:12.747 12:10:06 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:12.747 12:10:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:12.747 12:10:06 -- common/autotest_common.sh@10 -- # set +x 00:32:12.747 INFO: Log level set to 20 00:32:12.747 INFO: Requests: 00:32:12.747 { 00:32:12.747 "jsonrpc": "2.0", 00:32:12.747 "method": "nvmf_set_config", 00:32:12.747 "id": 1, 00:32:12.747 "params": { 00:32:12.747 "admin_cmd_passthru": { 00:32:12.747 "identify_ctrlr": true 00:32:12.747 } 00:32:12.747 } 00:32:12.747 } 00:32:12.747 00:32:12.747 INFO: response: 00:32:12.747 { 00:32:12.747 "jsonrpc": "2.0", 00:32:12.747 "id": 1, 00:32:12.747 "result": true 00:32:12.747 } 00:32:12.747 00:32:12.747 12:10:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:12.747 12:10:06 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:12.747 12:10:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:12.747 12:10:06 -- common/autotest_common.sh@10 -- # set +x 00:32:12.747 INFO: Setting log level to 20 00:32:12.747 INFO: Setting log level to 20 00:32:12.747 INFO: Log level set to 20 00:32:12.747 INFO: Log level set to 20 00:32:12.747 INFO: Requests: 00:32:12.747 { 00:32:12.747 "jsonrpc": "2.0", 00:32:12.747 "method": "framework_start_init", 00:32:12.747 "id": 1 00:32:12.747 } 00:32:12.747 00:32:12.747 INFO: Requests: 00:32:12.747 { 00:32:12.747 "jsonrpc": "2.0", 00:32:12.747 "method": "framework_start_init", 00:32:12.747 "id": 1 00:32:12.747 } 00:32:12.747 00:32:13.008 [2024-06-10 12:10:06.554666] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:13.008 INFO: response: 00:32:13.008 { 00:32:13.008 "jsonrpc": "2.0", 00:32:13.008 "id": 1, 00:32:13.008 "result": true 00:32:13.008 } 00:32:13.008 00:32:13.008 INFO: response: 00:32:13.008 { 00:32:13.008 "jsonrpc": "2.0", 00:32:13.008 "id": 1, 00:32:13.008 "result": true 00:32:13.008 } 00:32:13.008 00:32:13.008 12:10:06 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.008 12:10:06 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:13.008 12:10:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.008 12:10:06 -- common/autotest_common.sh@10 -- # set +x 00:32:13.008 INFO: Setting log level to 40 00:32:13.008 INFO: Setting log level to 40 00:32:13.008 INFO: Setting log level to 40 00:32:13.008 [2024-06-10 12:10:06.567900] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:13.008 12:10:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.008 12:10:06 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:13.008 12:10:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:13.008 12:10:06 -- common/autotest_common.sh@10 -- # set +x 00:32:13.008 12:10:06 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:32:13.008 12:10:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.008 12:10:06 -- common/autotest_common.sh@10 -- # set +x 00:32:13.277 Nvme0n1 00:32:13.277 12:10:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.277 12:10:06 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:13.277 12:10:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.277 12:10:06 -- common/autotest_common.sh@10 -- # set +x 00:32:13.277 12:10:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.277 12:10:06 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:13.277 12:10:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.277 12:10:06 -- common/autotest_common.sh@10 -- # set +x 00:32:13.277 12:10:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.277 12:10:06 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:13.277 12:10:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.277 12:10:06 -- common/autotest_common.sh@10 -- # set +x 00:32:13.277 [2024-06-10 12:10:06.948467] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:13.277 12:10:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.277 12:10:06 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:13.277 12:10:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.277 12:10:06 -- common/autotest_common.sh@10 -- # set +x 00:32:13.277 [2024-06-10 12:10:06.960254] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:32:13.277 [ 00:32:13.277 { 00:32:13.277 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:13.277 "subtype": "Discovery", 00:32:13.277 "listen_addresses": [], 00:32:13.277 "allow_any_host": true, 00:32:13.277 "hosts": [] 00:32:13.277 }, 00:32:13.277 { 00:32:13.277 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:13.277 "subtype": "NVMe", 00:32:13.277 "listen_addresses": [ 00:32:13.277 { 00:32:13.277 "transport": "TCP", 00:32:13.277 "trtype": "TCP", 00:32:13.277 "adrfam": "IPv4", 00:32:13.277 "traddr": "10.0.0.2", 00:32:13.277 "trsvcid": "4420" 00:32:13.277 } 00:32:13.277 ], 00:32:13.277 "allow_any_host": true, 00:32:13.277 "hosts": [], 00:32:13.277 "serial_number": "SPDK00000000000001", 
00:32:13.277 "model_number": "SPDK bdev Controller", 00:32:13.277 "max_namespaces": 1, 00:32:13.277 "min_cntlid": 1, 00:32:13.277 "max_cntlid": 65519, 00:32:13.277 "namespaces": [ 00:32:13.277 { 00:32:13.277 "nsid": 1, 00:32:13.277 "bdev_name": "Nvme0n1", 00:32:13.277 "name": "Nvme0n1", 00:32:13.277 "nguid": "36344730526054940025384500000027", 00:32:13.277 "uuid": "36344730-5260-5494-0025-384500000027" 00:32:13.277 } 00:32:13.277 ] 00:32:13.277 } 00:32:13.277 ] 00:32:13.277 12:10:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.277 12:10:06 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:13.277 12:10:06 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:13.277 12:10:06 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:13.277 EAL: No free 2048 kB hugepages reported on node 1 00:32:13.537 12:10:07 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:32:13.537 12:10:07 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:13.537 12:10:07 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:13.537 12:10:07 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:13.537 EAL: No free 2048 kB hugepages reported on node 1 00:32:13.798 12:10:07 -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:32:13.798 12:10:07 -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:32:13.798 12:10:07 -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:32:13.798 12:10:07 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:13.798 12:10:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.798 12:10:07 -- common/autotest_common.sh@10 -- # set +x 00:32:13.798 12:10:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.798 12:10:07 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:13.798 12:10:07 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:13.798 12:10:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:13.798 12:10:07 -- nvmf/common.sh@116 -- # sync 00:32:13.798 12:10:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:13.798 12:10:07 -- nvmf/common.sh@119 -- # set +e 00:32:13.798 12:10:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:13.798 12:10:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:13.798 rmmod nvme_tcp 00:32:13.798 rmmod nvme_fabrics 00:32:13.798 rmmod nvme_keyring 00:32:13.798 12:10:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:13.798 12:10:07 -- nvmf/common.sh@123 -- # set -e 00:32:13.798 12:10:07 -- nvmf/common.sh@124 -- # return 0 00:32:13.798 12:10:07 -- nvmf/common.sh@477 -- # '[' -n 2166540 ']' 00:32:13.798 12:10:07 -- nvmf/common.sh@478 -- # killprocess 2166540 00:32:13.798 12:10:07 -- common/autotest_common.sh@926 -- # '[' -z 2166540 ']' 00:32:13.798 12:10:07 -- common/autotest_common.sh@930 -- # kill -0 2166540 00:32:13.798 12:10:07 -- common/autotest_common.sh@931 -- # uname 00:32:13.798 12:10:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:13.798 12:10:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2166540 00:32:13.798 12:10:07 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:13.798 12:10:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:13.798 12:10:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2166540' 00:32:13.798 killing process with pid 2166540 00:32:13.798 12:10:07 -- common/autotest_common.sh@945 -- # kill 2166540 00:32:13.798 [2024-06-10 12:10:07.502668] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:32:13.798 12:10:07 -- common/autotest_common.sh@950 -- # wait 2166540 00:32:14.059 12:10:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:14.059 12:10:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:14.059 12:10:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:14.059 12:10:07 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:14.059 12:10:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:14.059 12:10:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:14.059 12:10:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:14.059 12:10:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:16.605 12:10:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:16.605 00:32:16.605 real 0m12.364s 00:32:16.605 user 0m9.750s 00:32:16.605 sys 0m5.978s 00:32:16.605 12:10:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:16.605 12:10:09 -- common/autotest_common.sh@10 -- # set +x 00:32:16.605 ************************************ 00:32:16.605 END TEST nvmf_identify_passthru 00:32:16.605 ************************************ 00:32:16.605 12:10:09 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:16.605 12:10:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:16.605 12:10:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:16.605 12:10:09 -- common/autotest_common.sh@10 -- # set +x 00:32:16.605 ************************************ 00:32:16.605 START TEST nvmf_dif 00:32:16.605 ************************************ 00:32:16.605 12:10:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:16.605 * Looking for test storage... 
00:32:16.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:16.605 12:10:09 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:16.605 12:10:09 -- nvmf/common.sh@7 -- # uname -s 00:32:16.605 12:10:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:16.605 12:10:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:16.605 12:10:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:16.605 12:10:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:16.605 12:10:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:16.605 12:10:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:16.605 12:10:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:16.605 12:10:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:16.605 12:10:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:16.605 12:10:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:16.605 12:10:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:16.605 12:10:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:16.605 12:10:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:16.605 12:10:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:16.605 12:10:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:16.605 12:10:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:16.605 12:10:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:16.605 12:10:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:16.605 12:10:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:16.605 12:10:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.605 12:10:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.605 12:10:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.605 12:10:09 -- paths/export.sh@5 -- # export PATH 00:32:16.605 12:10:09 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.605 12:10:09 -- nvmf/common.sh@46 -- # : 0 00:32:16.605 12:10:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:16.605 12:10:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:16.605 12:10:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:16.605 12:10:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:16.605 12:10:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:16.605 12:10:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:16.605 12:10:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:16.605 12:10:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:16.605 12:10:10 -- target/dif.sh@15 -- # NULL_META=16 00:32:16.605 12:10:10 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:32:16.605 12:10:10 -- target/dif.sh@15 -- # NULL_SIZE=64 00:32:16.605 12:10:10 -- target/dif.sh@15 -- # NULL_DIF=1 00:32:16.605 12:10:10 -- target/dif.sh@135 -- # nvmftestinit 00:32:16.605 12:10:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:16.605 12:10:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:16.605 12:10:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:16.605 12:10:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:16.605 12:10:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:16.605 12:10:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.605 12:10:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:16.605 12:10:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:16.605 12:10:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:16.605 12:10:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:16.605 12:10:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:16.605 12:10:10 -- common/autotest_common.sh@10 -- # set +x 00:32:23.203 12:10:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:23.203 12:10:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:23.203 12:10:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:23.203 12:10:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:23.203 12:10:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:23.203 12:10:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:23.203 12:10:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:23.203 12:10:16 -- nvmf/common.sh@294 -- # net_devs=() 00:32:23.203 12:10:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:23.203 12:10:16 -- nvmf/common.sh@295 -- # e810=() 00:32:23.203 12:10:16 -- nvmf/common.sh@295 -- # local -ga e810 00:32:23.203 12:10:16 -- nvmf/common.sh@296 -- # x722=() 00:32:23.203 12:10:16 -- nvmf/common.sh@296 -- # local -ga x722 00:32:23.203 12:10:16 -- nvmf/common.sh@297 -- # mlx=() 00:32:23.203 12:10:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:23.203 12:10:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:23.203 12:10:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:23.203 12:10:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:23.203 12:10:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:32:23.203 12:10:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:23.203 12:10:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:23.203 12:10:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:23.203 12:10:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:23.203 12:10:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:23.203 12:10:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:23.203 12:10:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:23.203 12:10:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:23.203 12:10:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:23.203 12:10:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:23.203 12:10:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:23.203 12:10:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:23.203 12:10:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:23.203 12:10:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:23.203 12:10:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:23.203 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:23.203 12:10:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:23.203 12:10:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:23.203 12:10:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.203 12:10:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.203 12:10:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:23.203 12:10:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:23.203 12:10:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:23.203 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:23.203 12:10:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:23.203 12:10:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:23.203 12:10:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.203 12:10:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.203 12:10:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:23.203 12:10:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:23.203 12:10:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:23.203 12:10:16 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:23.203 12:10:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:23.203 12:10:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.203 12:10:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:23.203 12:10:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.203 12:10:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:23.203 Found net devices under 0000:31:00.0: cvl_0_0 00:32:23.203 12:10:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.203 12:10:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:23.203 12:10:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.203 12:10:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:23.203 12:10:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.203 12:10:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:23.203 Found net devices under 0000:31:00.1: cvl_0_1 00:32:23.203 12:10:16 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:32:23.203 12:10:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:23.203 12:10:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:23.203 12:10:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:23.203 12:10:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:23.203 12:10:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:23.203 12:10:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:23.203 12:10:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:23.204 12:10:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:23.204 12:10:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:23.204 12:10:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:23.204 12:10:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:23.204 12:10:16 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:23.204 12:10:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:23.204 12:10:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:23.204 12:10:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:23.204 12:10:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:23.204 12:10:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:23.204 12:10:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:23.204 12:10:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:23.464 12:10:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:23.464 12:10:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:23.464 12:10:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:23.464 12:10:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:23.464 12:10:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:23.464 12:10:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:23.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:23.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:32:23.464 00:32:23.464 --- 10.0.0.2 ping statistics --- 00:32:23.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.464 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:32:23.464 12:10:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:23.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:23.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.353 ms 00:32:23.464 00:32:23.464 --- 10.0.0.1 ping statistics --- 00:32:23.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.464 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:32:23.464 12:10:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:23.464 12:10:17 -- nvmf/common.sh@410 -- # return 0 00:32:23.464 12:10:17 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:32:23.464 12:10:17 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:26.763 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:26.763 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:26.763 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:32:26.763 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:26.763 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:26.764 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:26.764 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:26.764 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:26.764 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:26.764 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:32:26.764 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:26.764 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:32:26.764 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:26.764 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:26.764 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:26.764 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:26.764 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:26.764 12:10:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:26.764 12:10:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:26.764 12:10:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:26.764 12:10:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:26.764 12:10:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:26.764 12:10:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:26.764 12:10:20 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:32:26.764 12:10:20 -- target/dif.sh@137 -- # nvmfappstart 00:32:26.764 12:10:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:26.764 12:10:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:26.764 12:10:20 -- common/autotest_common.sh@10 -- # set +x 00:32:26.764 12:10:20 -- nvmf/common.sh@469 -- # nvmfpid=2172538 00:32:26.764 12:10:20 -- nvmf/common.sh@470 -- # waitforlisten 2172538 00:32:26.764 12:10:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:26.764 12:10:20 -- common/autotest_common.sh@819 -- # '[' -z 2172538 ']' 00:32:26.764 12:10:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:26.764 12:10:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:26.764 12:10:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:26.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:26.764 12:10:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:26.764 12:10:20 -- common/autotest_common.sh@10 -- # set +x 00:32:26.764 [2024-06-10 12:10:20.416150] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:32:26.764 [2024-06-10 12:10:20.416203] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:26.764 EAL: No free 2048 kB hugepages reported on node 1 00:32:26.764 [2024-06-10 12:10:20.484222] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.024 [2024-06-10 12:10:20.549661] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:27.024 [2024-06-10 12:10:20.549788] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:27.024 [2024-06-10 12:10:20.549797] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:27.024 [2024-06-10 12:10:20.549804] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:27.024 [2024-06-10 12:10:20.549823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.595 12:10:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:27.595 12:10:21 -- common/autotest_common.sh@852 -- # return 0 00:32:27.595 12:10:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:27.595 12:10:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:27.595 12:10:21 -- common/autotest_common.sh@10 -- # set +x 00:32:27.595 12:10:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:27.595 12:10:21 -- target/dif.sh@139 -- # create_transport 00:32:27.595 12:10:21 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:32:27.595 12:10:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:27.595 12:10:21 -- common/autotest_common.sh@10 -- # set +x 00:32:27.595 [2024-06-10 12:10:21.213066] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:27.595 12:10:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:27.595 12:10:21 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:32:27.595 12:10:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:27.595 12:10:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:27.595 12:10:21 -- common/autotest_common.sh@10 -- # set +x 00:32:27.595 ************************************ 00:32:27.595 START TEST fio_dif_1_default 00:32:27.595 ************************************ 00:32:27.595 12:10:21 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:32:27.595 12:10:21 -- target/dif.sh@86 -- # create_subsystems 0 00:32:27.595 12:10:21 -- target/dif.sh@28 -- # local sub 00:32:27.595 12:10:21 -- target/dif.sh@30 -- # for sub in "$@" 00:32:27.595 12:10:21 -- target/dif.sh@31 -- # create_subsystem 0 00:32:27.595 12:10:21 -- target/dif.sh@18 -- # local sub_id=0 00:32:27.595 12:10:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:27.595 12:10:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:27.595 12:10:21 -- common/autotest_common.sh@10 -- # set +x 00:32:27.595 bdev_null0 00:32:27.595 12:10:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:27.595 12:10:21 -- target/dif.sh@22 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:27.595 12:10:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:27.595 12:10:21 -- common/autotest_common.sh@10 -- # set +x 00:32:27.595 12:10:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:27.595 12:10:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:27.595 12:10:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:27.596 12:10:21 -- common/autotest_common.sh@10 -- # set +x 00:32:27.596 12:10:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:27.596 12:10:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:27.596 12:10:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:27.596 12:10:21 -- common/autotest_common.sh@10 -- # set +x 00:32:27.596 [2024-06-10 12:10:21.269325] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:27.596 12:10:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:27.596 12:10:21 -- target/dif.sh@87 -- # fio /dev/fd/62 00:32:27.596 12:10:21 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:32:27.596 12:10:21 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:27.596 12:10:21 -- nvmf/common.sh@520 -- # config=() 00:32:27.596 12:10:21 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:27.596 12:10:21 -- nvmf/common.sh@520 -- # local subsystem config 00:32:27.596 12:10:21 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:27.596 12:10:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:27.596 12:10:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:27.596 { 00:32:27.596 "params": { 00:32:27.596 "name": "Nvme$subsystem", 00:32:27.596 "trtype": "$TEST_TRANSPORT", 00:32:27.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:27.596 "adrfam": "ipv4", 00:32:27.596 "trsvcid": "$NVMF_PORT", 00:32:27.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:27.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:27.596 "hdgst": ${hdgst:-false}, 00:32:27.596 "ddgst": ${ddgst:-false} 00:32:27.596 }, 00:32:27.596 "method": "bdev_nvme_attach_controller" 00:32:27.596 } 00:32:27.596 EOF 00:32:27.596 )") 00:32:27.596 12:10:21 -- target/dif.sh@82 -- # gen_fio_conf 00:32:27.596 12:10:21 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:27.596 12:10:21 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:27.596 12:10:21 -- target/dif.sh@54 -- # local file 00:32:27.596 12:10:21 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:27.596 12:10:21 -- target/dif.sh@56 -- # cat 00:32:27.596 12:10:21 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:27.596 12:10:21 -- common/autotest_common.sh@1320 -- # shift 00:32:27.596 12:10:21 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:27.596 12:10:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:27.596 12:10:21 -- nvmf/common.sh@542 -- # cat 00:32:27.596 12:10:21 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:27.596 12:10:21 -- target/dif.sh@72 -- # (( file <= files )) 00:32:27.596 12:10:21 -- common/autotest_common.sh@1324 -- # 
ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:27.596 12:10:21 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:27.596 12:10:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:27.596 12:10:21 -- nvmf/common.sh@544 -- # jq . 00:32:27.596 12:10:21 -- nvmf/common.sh@545 -- # IFS=, 00:32:27.596 12:10:21 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:27.596 "params": { 00:32:27.596 "name": "Nvme0", 00:32:27.596 "trtype": "tcp", 00:32:27.596 "traddr": "10.0.0.2", 00:32:27.596 "adrfam": "ipv4", 00:32:27.596 "trsvcid": "4420", 00:32:27.596 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:27.596 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:27.596 "hdgst": false, 00:32:27.596 "ddgst": false 00:32:27.596 }, 00:32:27.596 "method": "bdev_nvme_attach_controller" 00:32:27.596 }' 00:32:27.596 12:10:21 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:27.596 12:10:21 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:27.596 12:10:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:27.596 12:10:21 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:27.596 12:10:21 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:32:27.596 12:10:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:27.596 12:10:21 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:27.596 12:10:21 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:27.596 12:10:21 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:27.596 12:10:21 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:28.190 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:28.190 fio-3.35 00:32:28.190 Starting 1 thread 00:32:28.190 EAL: No free 2048 kB hugepages reported on node 1 00:32:28.449 [2024-06-10 12:10:22.043746] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:32:28.449 [2024-06-10 12:10:22.043792] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:38.445 00:32:38.445 filename0: (groupid=0, jobs=1): err= 0: pid=2173049: Mon Jun 10 12:10:32 2024 00:32:38.445 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10040msec) 00:32:38.445 slat (nsec): min=5335, max=53306, avg=6391.26, stdev=2350.88 00:32:38.445 clat (usec): min=41792, max=42944, avg=41988.46, stdev=76.58 00:32:38.445 lat (usec): min=41800, max=42949, avg=41994.85, stdev=76.71 00:32:38.445 clat percentiles (usec): 00:32:38.445 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:32:38.445 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:32:38.445 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:38.445 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:32:38.445 | 99.99th=[42730] 00:32:38.445 bw ( KiB/s): min= 352, max= 384, per=99.77%, avg=380.80, stdev= 9.85, samples=20 00:32:38.445 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:32:38.445 lat (msec) : 50=100.00% 00:32:38.445 cpu : usr=96.23%, sys=3.55%, ctx=14, majf=0, minf=214 00:32:38.445 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:38.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.445 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.445 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:38.445 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:38.445 00:32:38.445 Run status group 0 (all jobs): 00:32:38.445 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3824KiB (3916kB), run=10040-10040msec 00:32:38.706 12:10:32 -- target/dif.sh@88 -- # destroy_subsystems 0 00:32:38.706 12:10:32 -- target/dif.sh@43 -- # local sub 00:32:38.706 12:10:32 -- target/dif.sh@45 -- # for sub in "$@" 00:32:38.706 12:10:32 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:38.706 12:10:32 -- target/dif.sh@36 -- # local sub_id=0 00:32:38.706 12:10:32 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:38.706 12:10:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.706 12:10:32 -- common/autotest_common.sh@10 -- # set +x 00:32:38.706 12:10:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.706 12:10:32 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:38.706 12:10:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.706 12:10:32 -- common/autotest_common.sh@10 -- # set +x 00:32:38.706 12:10:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.706 00:32:38.706 real 0m11.161s 00:32:38.706 user 0m28.538s 00:32:38.706 sys 0m0.691s 00:32:38.706 12:10:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:38.706 12:10:32 -- common/autotest_common.sh@10 -- # set +x 00:32:38.706 ************************************ 00:32:38.706 END TEST fio_dif_1_default 00:32:38.706 ************************************ 00:32:38.706 12:10:32 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:32:38.706 12:10:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:38.706 12:10:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:38.706 12:10:32 -- common/autotest_common.sh@10 -- # set +x 00:32:38.706 ************************************ 00:32:38.706 START TEST fio_dif_1_multi_subsystems 00:32:38.706 
************************************ 00:32:38.706 12:10:32 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:32:38.706 12:10:32 -- target/dif.sh@92 -- # local files=1 00:32:38.706 12:10:32 -- target/dif.sh@94 -- # create_subsystems 0 1 00:32:38.706 12:10:32 -- target/dif.sh@28 -- # local sub 00:32:38.706 12:10:32 -- target/dif.sh@30 -- # for sub in "$@" 00:32:38.706 12:10:32 -- target/dif.sh@31 -- # create_subsystem 0 00:32:38.706 12:10:32 -- target/dif.sh@18 -- # local sub_id=0 00:32:38.706 12:10:32 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:38.706 12:10:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.706 12:10:32 -- common/autotest_common.sh@10 -- # set +x 00:32:38.706 bdev_null0 00:32:38.706 12:10:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.706 12:10:32 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:38.706 12:10:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.706 12:10:32 -- common/autotest_common.sh@10 -- # set +x 00:32:38.706 12:10:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.706 12:10:32 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:38.706 12:10:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.706 12:10:32 -- common/autotest_common.sh@10 -- # set +x 00:32:38.706 12:10:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.706 12:10:32 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:38.706 12:10:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.706 12:10:32 -- common/autotest_common.sh@10 -- # set +x 00:32:38.706 [2024-06-10 12:10:32.477373] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:38.966 12:10:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.966 12:10:32 -- target/dif.sh@30 -- # for sub in "$@" 00:32:38.966 12:10:32 -- target/dif.sh@31 -- # create_subsystem 1 00:32:38.966 12:10:32 -- target/dif.sh@18 -- # local sub_id=1 00:32:38.966 12:10:32 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:38.966 12:10:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.966 12:10:32 -- common/autotest_common.sh@10 -- # set +x 00:32:38.966 bdev_null1 00:32:38.966 12:10:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.966 12:10:32 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:38.966 12:10:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.966 12:10:32 -- common/autotest_common.sh@10 -- # set +x 00:32:38.966 12:10:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.966 12:10:32 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:38.966 12:10:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.966 12:10:32 -- common/autotest_common.sh@10 -- # set +x 00:32:38.966 12:10:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.966 12:10:32 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:38.966 12:10:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.966 12:10:32 -- common/autotest_common.sh@10 -- # set +x 
00:32:38.966 12:10:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.966 12:10:32 -- target/dif.sh@95 -- # fio /dev/fd/62 00:32:38.966 12:10:32 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:32:38.966 12:10:32 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:38.966 12:10:32 -- nvmf/common.sh@520 -- # config=() 00:32:38.966 12:10:32 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:38.966 12:10:32 -- nvmf/common.sh@520 -- # local subsystem config 00:32:38.966 12:10:32 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:38.966 12:10:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:38.966 12:10:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:38.966 { 00:32:38.966 "params": { 00:32:38.966 "name": "Nvme$subsystem", 00:32:38.966 "trtype": "$TEST_TRANSPORT", 00:32:38.966 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:38.966 "adrfam": "ipv4", 00:32:38.966 "trsvcid": "$NVMF_PORT", 00:32:38.966 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:38.966 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:38.966 "hdgst": ${hdgst:-false}, 00:32:38.966 "ddgst": ${ddgst:-false} 00:32:38.966 }, 00:32:38.966 "method": "bdev_nvme_attach_controller" 00:32:38.966 } 00:32:38.966 EOF 00:32:38.966 )") 00:32:38.966 12:10:32 -- target/dif.sh@82 -- # gen_fio_conf 00:32:38.966 12:10:32 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:38.966 12:10:32 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:38.966 12:10:32 -- target/dif.sh@54 -- # local file 00:32:38.966 12:10:32 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:38.966 12:10:32 -- target/dif.sh@56 -- # cat 00:32:38.966 12:10:32 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:38.966 12:10:32 -- common/autotest_common.sh@1320 -- # shift 00:32:38.966 12:10:32 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:38.966 12:10:32 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:38.966 12:10:32 -- nvmf/common.sh@542 -- # cat 00:32:38.966 12:10:32 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:38.966 12:10:32 -- target/dif.sh@72 -- # (( file <= files )) 00:32:38.966 12:10:32 -- target/dif.sh@73 -- # cat 00:32:38.966 12:10:32 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:38.966 12:10:32 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:38.966 12:10:32 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:38.966 12:10:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:38.966 12:10:32 -- target/dif.sh@72 -- # (( file++ )) 00:32:38.966 12:10:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:38.966 { 00:32:38.966 "params": { 00:32:38.966 "name": "Nvme$subsystem", 00:32:38.966 "trtype": "$TEST_TRANSPORT", 00:32:38.966 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:38.966 "adrfam": "ipv4", 00:32:38.966 "trsvcid": "$NVMF_PORT", 00:32:38.966 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:38.966 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:38.966 "hdgst": ${hdgst:-false}, 00:32:38.966 "ddgst": ${ddgst:-false} 00:32:38.966 }, 00:32:38.966 "method": "bdev_nvme_attach_controller" 00:32:38.966 } 00:32:38.966 EOF 00:32:38.966 )") 00:32:38.966 
12:10:32 -- target/dif.sh@72 -- # (( file <= files )) 00:32:38.966 12:10:32 -- nvmf/common.sh@542 -- # cat 00:32:38.966 12:10:32 -- nvmf/common.sh@544 -- # jq . 00:32:38.966 12:10:32 -- nvmf/common.sh@545 -- # IFS=, 00:32:38.966 12:10:32 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:38.966 "params": { 00:32:38.966 "name": "Nvme0", 00:32:38.966 "trtype": "tcp", 00:32:38.966 "traddr": "10.0.0.2", 00:32:38.966 "adrfam": "ipv4", 00:32:38.966 "trsvcid": "4420", 00:32:38.966 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:38.966 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:38.966 "hdgst": false, 00:32:38.966 "ddgst": false 00:32:38.966 }, 00:32:38.966 "method": "bdev_nvme_attach_controller" 00:32:38.966 },{ 00:32:38.966 "params": { 00:32:38.966 "name": "Nvme1", 00:32:38.966 "trtype": "tcp", 00:32:38.966 "traddr": "10.0.0.2", 00:32:38.966 "adrfam": "ipv4", 00:32:38.966 "trsvcid": "4420", 00:32:38.966 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:38.966 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:38.966 "hdgst": false, 00:32:38.966 "ddgst": false 00:32:38.966 }, 00:32:38.966 "method": "bdev_nvme_attach_controller" 00:32:38.966 }' 00:32:38.966 12:10:32 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:38.966 12:10:32 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:38.966 12:10:32 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:38.966 12:10:32 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:38.966 12:10:32 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:32:38.966 12:10:32 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:38.966 12:10:32 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:38.966 12:10:32 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:38.966 12:10:32 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:38.966 12:10:32 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:39.276 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:39.276 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:39.276 fio-3.35 00:32:39.276 Starting 2 threads 00:32:39.276 EAL: No free 2048 kB hugepages reported on node 1 00:32:39.861 [2024-06-10 12:10:33.371779] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:32:39.861 [2024-06-10 12:10:33.371825] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:49.861 00:32:49.861 filename0: (groupid=0, jobs=1): err= 0: pid=2175589: Mon Jun 10 12:10:43 2024 00:32:49.861 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10041msec) 00:32:49.861 slat (nsec): min=5346, max=49908, avg=6295.13, stdev=2329.49 00:32:49.861 clat (usec): min=41882, max=43229, avg=41992.22, stdev=106.88 00:32:49.861 lat (usec): min=41890, max=43263, avg=41998.51, stdev=107.42 00:32:49.861 clat percentiles (usec): 00:32:49.861 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:32:49.861 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:32:49.861 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:49.861 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:32:49.861 | 99.99th=[43254] 00:32:49.861 bw ( KiB/s): min= 352, max= 384, per=33.87%, avg=380.80, stdev= 9.85, samples=20 00:32:49.861 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:32:49.861 lat (msec) : 50=100.00% 00:32:49.861 cpu : usr=97.36%, sys=2.42%, ctx=12, majf=0, minf=139 00:32:49.861 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:49.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.861 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.861 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.861 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:49.861 filename1: (groupid=0, jobs=1): err= 0: pid=2175590: Mon Jun 10 12:10:43 2024 00:32:49.861 read: IOPS=185, BW=743KiB/s (760kB/s)(7440KiB/10020msec) 00:32:49.861 slat (nsec): min=5337, max=38303, avg=6186.86, stdev=1560.92 00:32:49.861 clat (usec): min=939, max=43165, avg=21529.82, stdev=20274.89 00:32:49.861 lat (usec): min=947, max=43204, avg=21536.00, stdev=20274.85 00:32:49.861 clat percentiles (usec): 00:32:49.861 | 1.00th=[ 988], 5.00th=[ 1139], 10.00th=[ 1156], 20.00th=[ 1205], 00:32:49.861 | 30.00th=[ 1254], 40.00th=[ 1287], 50.00th=[41157], 60.00th=[41681], 00:32:49.861 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:32:49.861 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:32:49.861 | 99.99th=[43254] 00:32:49.861 bw ( KiB/s): min= 704, max= 768, per=66.14%, avg=742.40, stdev=30.45, samples=20 00:32:49.861 iops : min= 176, max= 192, avg=185.60, stdev= 7.61, samples=20 00:32:49.861 lat (usec) : 1000=1.61% 00:32:49.861 lat (msec) : 2=48.28%, 50=50.11% 00:32:49.861 cpu : usr=97.45%, sys=2.33%, ctx=11, majf=0, minf=179 00:32:49.861 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:49.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.861 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.861 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.861 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:49.861 00:32:49.861 Run status group 0 (all jobs): 00:32:49.861 READ: bw=1122KiB/s (1149kB/s), 381KiB/s-743KiB/s (390kB/s-760kB/s), io=11.0MiB (11.5MB), run=10020-10041msec 00:32:50.122 12:10:43 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:32:50.122 12:10:43 -- target/dif.sh@43 -- # local sub 00:32:50.122 12:10:43 -- target/dif.sh@45 -- # for sub in "$@" 00:32:50.122 12:10:43 -- target/dif.sh@46 -- # destroy_subsystem 0 
00:32:50.122 12:10:43 -- target/dif.sh@36 -- # local sub_id=0 00:32:50.122 12:10:43 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:50.122 12:10:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:50.122 12:10:43 -- common/autotest_common.sh@10 -- # set +x 00:32:50.122 12:10:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:50.122 12:10:43 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:50.122 12:10:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:50.122 12:10:43 -- common/autotest_common.sh@10 -- # set +x 00:32:50.122 12:10:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:50.122 12:10:43 -- target/dif.sh@45 -- # for sub in "$@" 00:32:50.122 12:10:43 -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:50.122 12:10:43 -- target/dif.sh@36 -- # local sub_id=1 00:32:50.122 12:10:43 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:50.122 12:10:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:50.122 12:10:43 -- common/autotest_common.sh@10 -- # set +x 00:32:50.122 12:10:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:50.122 12:10:43 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:50.122 12:10:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:50.122 12:10:43 -- common/autotest_common.sh@10 -- # set +x 00:32:50.122 12:10:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:50.122 00:32:50.122 real 0m11.255s 00:32:50.122 user 0m34.614s 00:32:50.122 sys 0m0.818s 00:32:50.122 12:10:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:50.122 12:10:43 -- common/autotest_common.sh@10 -- # set +x 00:32:50.122 ************************************ 00:32:50.122 END TEST fio_dif_1_multi_subsystems 00:32:50.122 ************************************ 00:32:50.123 12:10:43 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:32:50.123 12:10:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:50.123 12:10:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:50.123 12:10:43 -- common/autotest_common.sh@10 -- # set +x 00:32:50.123 ************************************ 00:32:50.123 START TEST fio_dif_rand_params 00:32:50.123 ************************************ 00:32:50.123 12:10:43 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:32:50.123 12:10:43 -- target/dif.sh@100 -- # local NULL_DIF 00:32:50.123 12:10:43 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:50.123 12:10:43 -- target/dif.sh@103 -- # NULL_DIF=3 00:32:50.123 12:10:43 -- target/dif.sh@103 -- # bs=128k 00:32:50.123 12:10:43 -- target/dif.sh@103 -- # numjobs=3 00:32:50.123 12:10:43 -- target/dif.sh@103 -- # iodepth=3 00:32:50.123 12:10:43 -- target/dif.sh@103 -- # runtime=5 00:32:50.123 12:10:43 -- target/dif.sh@105 -- # create_subsystems 0 00:32:50.123 12:10:43 -- target/dif.sh@28 -- # local sub 00:32:50.123 12:10:43 -- target/dif.sh@30 -- # for sub in "$@" 00:32:50.123 12:10:43 -- target/dif.sh@31 -- # create_subsystem 0 00:32:50.123 12:10:43 -- target/dif.sh@18 -- # local sub_id=0 00:32:50.123 12:10:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:50.123 12:10:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:50.123 12:10:43 -- common/autotest_common.sh@10 -- # set +x 00:32:50.123 bdev_null0 00:32:50.123 12:10:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:50.123 12:10:43 -- 
target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:50.123 12:10:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:50.123 12:10:43 -- common/autotest_common.sh@10 -- # set +x 00:32:50.123 12:10:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:50.123 12:10:43 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:50.123 12:10:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:50.123 12:10:43 -- common/autotest_common.sh@10 -- # set +x 00:32:50.123 12:10:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:50.123 12:10:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:50.123 12:10:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:50.123 12:10:43 -- common/autotest_common.sh@10 -- # set +x 00:32:50.123 [2024-06-10 12:10:43.778548] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:50.123 12:10:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:50.123 12:10:43 -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:50.123 12:10:43 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:50.123 12:10:43 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:50.123 12:10:43 -- nvmf/common.sh@520 -- # config=() 00:32:50.123 12:10:43 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:50.123 12:10:43 -- nvmf/common.sh@520 -- # local subsystem config 00:32:50.123 12:10:43 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:50.123 12:10:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:50.123 12:10:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:50.123 { 00:32:50.123 "params": { 00:32:50.123 "name": "Nvme$subsystem", 00:32:50.123 "trtype": "$TEST_TRANSPORT", 00:32:50.123 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:50.123 "adrfam": "ipv4", 00:32:50.123 "trsvcid": "$NVMF_PORT", 00:32:50.123 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:50.123 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:50.123 "hdgst": ${hdgst:-false}, 00:32:50.123 "ddgst": ${ddgst:-false} 00:32:50.123 }, 00:32:50.123 "method": "bdev_nvme_attach_controller" 00:32:50.123 } 00:32:50.123 EOF 00:32:50.123 )") 00:32:50.123 12:10:43 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:50.123 12:10:43 -- target/dif.sh@82 -- # gen_fio_conf 00:32:50.123 12:10:43 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:50.123 12:10:43 -- target/dif.sh@54 -- # local file 00:32:50.123 12:10:43 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:50.123 12:10:43 -- target/dif.sh@56 -- # cat 00:32:50.123 12:10:43 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:50.123 12:10:43 -- common/autotest_common.sh@1320 -- # shift 00:32:50.123 12:10:43 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:50.123 12:10:43 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:50.123 12:10:43 -- nvmf/common.sh@542 -- # cat 00:32:50.123 12:10:43 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:50.123 12:10:43 -- 
target/dif.sh@72 -- # (( file = 1 )) 00:32:50.123 12:10:43 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:50.123 12:10:43 -- target/dif.sh@72 -- # (( file <= files )) 00:32:50.123 12:10:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:50.123 12:10:43 -- nvmf/common.sh@544 -- # jq . 00:32:50.123 12:10:43 -- nvmf/common.sh@545 -- # IFS=, 00:32:50.123 12:10:43 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:50.123 "params": { 00:32:50.123 "name": "Nvme0", 00:32:50.123 "trtype": "tcp", 00:32:50.123 "traddr": "10.0.0.2", 00:32:50.123 "adrfam": "ipv4", 00:32:50.123 "trsvcid": "4420", 00:32:50.123 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:50.123 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:50.123 "hdgst": false, 00:32:50.123 "ddgst": false 00:32:50.123 }, 00:32:50.123 "method": "bdev_nvme_attach_controller" 00:32:50.123 }' 00:32:50.123 12:10:43 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:50.123 12:10:43 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:50.123 12:10:43 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:50.123 12:10:43 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:50.123 12:10:43 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:32:50.123 12:10:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:50.123 12:10:43 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:50.123 12:10:43 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:50.123 12:10:43 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:50.123 12:10:43 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:50.718 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:50.718 ... 00:32:50.718 fio-3.35 00:32:50.718 Starting 3 threads 00:32:50.718 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.979 [2024-06-10 12:10:44.580259] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:32:50.979 [2024-06-10 12:10:44.580304] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:56.262 00:32:56.262 filename0: (groupid=0, jobs=1): err= 0: pid=2177809: Mon Jun 10 12:10:49 2024 00:32:56.262 read: IOPS=159, BW=20.0MiB/s (21.0MB/s)(101MiB/5036msec) 00:32:56.262 slat (nsec): min=5374, max=54466, avg=7857.07, stdev=2438.55 00:32:56.262 clat (usec): min=5243, max=91428, avg=18752.08, stdev=17011.05 00:32:56.262 lat (usec): min=5251, max=91434, avg=18759.94, stdev=17010.78 00:32:56.262 clat percentiles (usec): 00:32:56.262 | 1.00th=[ 7046], 5.00th=[ 7701], 10.00th=[ 8160], 20.00th=[ 8848], 00:32:56.262 | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[10421], 60.00th=[11076], 00:32:56.262 | 70.00th=[12125], 80.00th=[48497], 90.00th=[50070], 95.00th=[51119], 00:32:56.262 | 99.00th=[53740], 99.50th=[54789], 99.90th=[91751], 99.95th=[91751], 00:32:56.262 | 99.99th=[91751] 00:32:56.262 bw ( KiB/s): min=12032, max=35072, per=37.44%, avg=20531.20, stdev=6282.06, samples=10 00:32:56.262 iops : min= 94, max= 274, avg=160.40, stdev=49.08, samples=10 00:32:56.262 lat (msec) : 10=40.12%, 20=38.63%, 50=8.94%, 100=12.30% 00:32:56.262 cpu : usr=96.27%, sys=3.48%, ctx=12, majf=0, minf=120 00:32:56.262 IO depths : 1=9.8%, 2=90.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:56.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.262 issued rwts: total=805,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.262 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:56.262 filename0: (groupid=0, jobs=1): err= 0: pid=2177810: Mon Jun 10 12:10:49 2024 00:32:56.262 read: IOPS=139, BW=17.4MiB/s (18.3MB/s)(87.4MiB/5008msec) 00:32:56.262 slat (nsec): min=5346, max=33861, avg=8070.77, stdev=2023.28 00:32:56.262 clat (usec): min=6306, max=94350, avg=21473.44, stdev=20269.92 00:32:56.262 lat (usec): min=6314, max=94359, avg=21481.51, stdev=20270.09 00:32:56.262 clat percentiles (usec): 00:32:56.263 | 1.00th=[ 7046], 5.00th=[ 7832], 10.00th=[ 8160], 20.00th=[ 8979], 00:32:56.263 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10945], 60.00th=[11863], 00:32:56.263 | 70.00th=[13042], 80.00th=[49546], 90.00th=[51119], 95.00th=[52691], 00:32:56.263 | 99.00th=[90702], 99.50th=[91751], 99.90th=[93848], 99.95th=[93848], 00:32:56.263 | 99.99th=[93848] 00:32:56.263 bw ( KiB/s): min= 8448, max=23296, per=32.54%, avg=17843.20, stdev=4667.74, samples=10 00:32:56.263 iops : min= 66, max= 182, avg=139.40, stdev=36.47, samples=10 00:32:56.263 lat (msec) : 10=38.20%, 20=36.34%, 50=9.30%, 100=16.17% 00:32:56.263 cpu : usr=97.16%, sys=2.60%, ctx=11, majf=0, minf=93 00:32:56.263 IO depths : 1=1.9%, 2=98.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:56.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.263 issued rwts: total=699,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.263 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:56.263 filename0: (groupid=0, jobs=1): err= 0: pid=2177811: Mon Jun 10 12:10:49 2024 00:32:56.263 read: IOPS=130, BW=16.3MiB/s (17.1MB/s)(82.4MiB/5049msec) 00:32:56.263 slat (nsec): min=5343, max=31398, avg=7686.28, stdev=1991.62 00:32:56.263 clat (usec): min=7025, max=92621, avg=22904.86, stdev=19401.84 00:32:56.263 lat (usec): min=7033, max=92630, avg=22912.55, stdev=19401.84 00:32:56.263 clat 
percentiles (usec): 00:32:56.263 | 1.00th=[ 7439], 5.00th=[ 7963], 10.00th=[ 8455], 20.00th=[ 9241], 00:32:56.263 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[11469], 60.00th=[12387], 00:32:56.263 | 70.00th=[47973], 80.00th=[50070], 90.00th=[51119], 95.00th=[52691], 00:32:56.263 | 99.00th=[57934], 99.50th=[90702], 99.90th=[92799], 99.95th=[92799], 00:32:56.263 | 99.99th=[92799] 00:32:56.263 bw ( KiB/s): min=11264, max=19456, per=30.62%, avg=16793.60, stdev=2983.50, samples=10 00:32:56.263 iops : min= 88, max= 152, avg=131.20, stdev=23.31, samples=10 00:32:56.263 lat (msec) : 10=32.47%, 20=37.48%, 50=10.93%, 100=19.12% 00:32:56.263 cpu : usr=96.63%, sys=3.11%, ctx=10, majf=0, minf=98 00:32:56.263 IO depths : 1=6.4%, 2=93.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:56.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.263 issued rwts: total=659,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.263 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:56.263 00:32:56.263 Run status group 0 (all jobs): 00:32:56.263 READ: bw=53.5MiB/s (56.2MB/s), 16.3MiB/s-20.0MiB/s (17.1MB/s-21.0MB/s), io=270MiB (284MB), run=5008-5049msec 00:32:56.263 12:10:49 -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:56.263 12:10:49 -- target/dif.sh@43 -- # local sub 00:32:56.263 12:10:49 -- target/dif.sh@45 -- # for sub in "$@" 00:32:56.263 12:10:49 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:56.263 12:10:49 -- target/dif.sh@36 -- # local sub_id=0 00:32:56.263 12:10:49 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:56.263 12:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:56.263 12:10:49 -- common/autotest_common.sh@10 -- # set +x 00:32:56.263 12:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:56.263 12:10:49 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:56.263 12:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:56.263 12:10:49 -- common/autotest_common.sh@10 -- # set +x 00:32:56.263 12:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:56.263 12:10:49 -- target/dif.sh@109 -- # NULL_DIF=2 00:32:56.263 12:10:49 -- target/dif.sh@109 -- # bs=4k 00:32:56.263 12:10:49 -- target/dif.sh@109 -- # numjobs=8 00:32:56.263 12:10:49 -- target/dif.sh@109 -- # iodepth=16 00:32:56.263 12:10:49 -- target/dif.sh@109 -- # runtime= 00:32:56.263 12:10:49 -- target/dif.sh@109 -- # files=2 00:32:56.263 12:10:49 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:56.263 12:10:49 -- target/dif.sh@28 -- # local sub 00:32:56.263 12:10:49 -- target/dif.sh@30 -- # for sub in "$@" 00:32:56.263 12:10:49 -- target/dif.sh@31 -- # create_subsystem 0 00:32:56.263 12:10:49 -- target/dif.sh@18 -- # local sub_id=0 00:32:56.263 12:10:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:56.263 12:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:56.263 12:10:49 -- common/autotest_common.sh@10 -- # set +x 00:32:56.263 bdev_null0 00:32:56.263 12:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:56.263 12:10:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:56.263 12:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:56.263 12:10:49 -- common/autotest_common.sh@10 -- # set +x 00:32:56.263 12:10:49 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:56.263 12:10:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:56.263 12:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:56.263 12:10:49 -- common/autotest_common.sh@10 -- # set +x 00:32:56.263 12:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:56.263 12:10:49 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:56.263 12:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:56.263 12:10:49 -- common/autotest_common.sh@10 -- # set +x 00:32:56.263 [2024-06-10 12:10:49.969830] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:56.263 12:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:56.263 12:10:49 -- target/dif.sh@30 -- # for sub in "$@" 00:32:56.263 12:10:49 -- target/dif.sh@31 -- # create_subsystem 1 00:32:56.263 12:10:49 -- target/dif.sh@18 -- # local sub_id=1 00:32:56.263 12:10:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:56.263 12:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:56.263 12:10:49 -- common/autotest_common.sh@10 -- # set +x 00:32:56.263 bdev_null1 00:32:56.263 12:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:56.263 12:10:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:56.263 12:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:56.263 12:10:49 -- common/autotest_common.sh@10 -- # set +x 00:32:56.263 12:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:56.263 12:10:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:56.263 12:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:56.263 12:10:49 -- common/autotest_common.sh@10 -- # set +x 00:32:56.263 12:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:56.263 12:10:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:56.263 12:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:56.263 12:10:50 -- common/autotest_common.sh@10 -- # set +x 00:32:56.263 12:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:56.263 12:10:50 -- target/dif.sh@30 -- # for sub in "$@" 00:32:56.263 12:10:50 -- target/dif.sh@31 -- # create_subsystem 2 00:32:56.263 12:10:50 -- target/dif.sh@18 -- # local sub_id=2 00:32:56.263 12:10:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:56.263 12:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:56.263 12:10:50 -- common/autotest_common.sh@10 -- # set +x 00:32:56.525 bdev_null2 00:32:56.525 12:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:56.525 12:10:50 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:56.525 12:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:56.525 12:10:50 -- common/autotest_common.sh@10 -- # set +x 00:32:56.525 12:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:56.525 12:10:50 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:56.525 12:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 
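Editor's note: the xtrace lines here repeat the same four-step recipe per subsystem: create a null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 2; create an NVMe-oF subsystem; attach the bdev as a namespace; and add a TCP listener on 10.0.0.2:4420. Below is a hedged sketch of the equivalent sequence driven through SPDK's rpc.py (rpc_cmd in the test wraps the same RPCs); the script path is an assumption.

#!/usr/bin/env bash
# Illustrative replay of the per-subsystem setup traced above, via rpc.py.
RPC=./scripts/rpc.py   # assumed location inside an SPDK checkout

for i in 0 1 2; do
    # 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 2
    $RPC bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        --serial-number "53313233-$i" --allow-any-host
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done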
00:32:56.525 12:10:50 -- common/autotest_common.sh@10 -- # set +x 00:32:56.525 12:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:56.525 12:10:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:56.525 12:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:56.525 12:10:50 -- common/autotest_common.sh@10 -- # set +x 00:32:56.525 12:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:56.525 12:10:50 -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:56.525 12:10:50 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:56.525 12:10:50 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:56.525 12:10:50 -- nvmf/common.sh@520 -- # config=() 00:32:56.525 12:10:50 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:56.525 12:10:50 -- nvmf/common.sh@520 -- # local subsystem config 00:32:56.525 12:10:50 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:56.525 12:10:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:56.525 12:10:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:56.525 { 00:32:56.525 "params": { 00:32:56.525 "name": "Nvme$subsystem", 00:32:56.525 "trtype": "$TEST_TRANSPORT", 00:32:56.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:56.525 "adrfam": "ipv4", 00:32:56.525 "trsvcid": "$NVMF_PORT", 00:32:56.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:56.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:56.525 "hdgst": ${hdgst:-false}, 00:32:56.525 "ddgst": ${ddgst:-false} 00:32:56.526 }, 00:32:56.526 "method": "bdev_nvme_attach_controller" 00:32:56.526 } 00:32:56.526 EOF 00:32:56.526 )") 00:32:56.526 12:10:50 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:56.526 12:10:50 -- target/dif.sh@82 -- # gen_fio_conf 00:32:56.526 12:10:50 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:56.526 12:10:50 -- target/dif.sh@54 -- # local file 00:32:56.526 12:10:50 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:56.526 12:10:50 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:56.526 12:10:50 -- target/dif.sh@56 -- # cat 00:32:56.526 12:10:50 -- common/autotest_common.sh@1320 -- # shift 00:32:56.526 12:10:50 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:56.526 12:10:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:56.526 12:10:50 -- nvmf/common.sh@542 -- # cat 00:32:56.526 12:10:50 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:56.526 12:10:50 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:56.526 12:10:50 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:56.526 12:10:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:56.526 12:10:50 -- target/dif.sh@72 -- # (( file <= files )) 00:32:56.526 12:10:50 -- target/dif.sh@73 -- # cat 00:32:56.526 12:10:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:56.526 12:10:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:56.526 { 00:32:56.526 "params": { 00:32:56.526 "name": "Nvme$subsystem", 00:32:56.526 "trtype": "$TEST_TRANSPORT", 00:32:56.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:56.526 "adrfam": "ipv4", 
00:32:56.526 "trsvcid": "$NVMF_PORT", 00:32:56.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:56.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:56.526 "hdgst": ${hdgst:-false}, 00:32:56.526 "ddgst": ${ddgst:-false} 00:32:56.526 }, 00:32:56.526 "method": "bdev_nvme_attach_controller" 00:32:56.526 } 00:32:56.526 EOF 00:32:56.526 )") 00:32:56.526 12:10:50 -- target/dif.sh@72 -- # (( file++ )) 00:32:56.526 12:10:50 -- target/dif.sh@72 -- # (( file <= files )) 00:32:56.526 12:10:50 -- nvmf/common.sh@542 -- # cat 00:32:56.526 12:10:50 -- target/dif.sh@73 -- # cat 00:32:56.526 12:10:50 -- target/dif.sh@72 -- # (( file++ )) 00:32:56.526 12:10:50 -- target/dif.sh@72 -- # (( file <= files )) 00:32:56.526 12:10:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:56.526 12:10:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:56.526 { 00:32:56.526 "params": { 00:32:56.526 "name": "Nvme$subsystem", 00:32:56.526 "trtype": "$TEST_TRANSPORT", 00:32:56.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:56.526 "adrfam": "ipv4", 00:32:56.526 "trsvcid": "$NVMF_PORT", 00:32:56.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:56.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:56.526 "hdgst": ${hdgst:-false}, 00:32:56.526 "ddgst": ${ddgst:-false} 00:32:56.526 }, 00:32:56.526 "method": "bdev_nvme_attach_controller" 00:32:56.526 } 00:32:56.526 EOF 00:32:56.526 )") 00:32:56.526 12:10:50 -- nvmf/common.sh@542 -- # cat 00:32:56.526 12:10:50 -- nvmf/common.sh@544 -- # jq . 00:32:56.526 12:10:50 -- nvmf/common.sh@545 -- # IFS=, 00:32:56.526 12:10:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:56.526 "params": { 00:32:56.526 "name": "Nvme0", 00:32:56.526 "trtype": "tcp", 00:32:56.526 "traddr": "10.0.0.2", 00:32:56.526 "adrfam": "ipv4", 00:32:56.526 "trsvcid": "4420", 00:32:56.526 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:56.526 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:56.526 "hdgst": false, 00:32:56.526 "ddgst": false 00:32:56.526 }, 00:32:56.526 "method": "bdev_nvme_attach_controller" 00:32:56.526 },{ 00:32:56.526 "params": { 00:32:56.526 "name": "Nvme1", 00:32:56.526 "trtype": "tcp", 00:32:56.526 "traddr": "10.0.0.2", 00:32:56.526 "adrfam": "ipv4", 00:32:56.526 "trsvcid": "4420", 00:32:56.526 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:56.526 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:56.526 "hdgst": false, 00:32:56.526 "ddgst": false 00:32:56.526 }, 00:32:56.526 "method": "bdev_nvme_attach_controller" 00:32:56.526 },{ 00:32:56.526 "params": { 00:32:56.526 "name": "Nvme2", 00:32:56.526 "trtype": "tcp", 00:32:56.526 "traddr": "10.0.0.2", 00:32:56.526 "adrfam": "ipv4", 00:32:56.526 "trsvcid": "4420", 00:32:56.526 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:56.526 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:56.526 "hdgst": false, 00:32:56.526 "ddgst": false 00:32:56.526 }, 00:32:56.526 "method": "bdev_nvme_attach_controller" 00:32:56.526 }' 00:32:56.526 12:10:50 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:56.526 12:10:50 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:56.526 12:10:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:56.526 12:10:50 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:56.526 12:10:50 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:32:56.526 12:10:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:56.526 12:10:50 -- common/autotest_common.sh@1324 -- # asan_lib= 
00:32:56.526 12:10:50 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:56.526 12:10:50 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:56.526 12:10:50 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:56.788 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:56.788 ... 00:32:56.788 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:56.788 ... 00:32:56.788 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:56.788 ... 00:32:56.788 fio-3.35 00:32:56.788 Starting 24 threads 00:32:56.788 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.732 [2024-06-10 12:10:51.404068] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:32:57.732 [2024-06-10 12:10:51.404110] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:09.964 00:33:09.964 filename0: (groupid=0, jobs=1): err= 0: pid=2179333: Mon Jun 10 12:11:01 2024 00:33:09.964 read: IOPS=525, BW=2103KiB/s (2153kB/s)(20.6MiB/10014msec) 00:33:09.964 slat (nsec): min=5499, max=76374, avg=13013.66, stdev=9144.04 00:33:09.964 clat (usec): min=12440, max=38578, avg=30321.77, stdev=1385.99 00:33:09.964 lat (usec): min=12446, max=38585, avg=30334.78, stdev=1385.80 00:33:09.964 clat percentiles (usec): 00:33:09.964 | 1.00th=[26084], 5.00th=[29492], 10.00th=[29754], 20.00th=[30016], 00:33:09.964 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:33:09.964 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31327], 00:33:09.964 | 99.00th=[31851], 99.50th=[32113], 99.90th=[38536], 99.95th=[38536], 00:33:09.964 | 99.99th=[38536] 00:33:09.964 bw ( KiB/s): min= 2043, max= 2176, per=4.18%, avg=2101.11, stdev=64.55, samples=19 00:33:09.964 iops : min= 510, max= 544, avg=525.16, stdev=16.09, samples=19 00:33:09.964 lat (msec) : 20=0.44%, 50=99.56% 00:33:09.964 cpu : usr=97.62%, sys=1.38%, ctx=66, majf=0, minf=32 00:33:09.964 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:09.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.964 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.965 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.965 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:09.965 filename0: (groupid=0, jobs=1): err= 0: pid=2179334: Mon Jun 10 12:11:01 2024 00:33:09.965 read: IOPS=532, BW=2131KiB/s (2183kB/s)(20.8MiB/10010msec) 00:33:09.965 slat (nsec): min=5510, max=66228, avg=12085.47, stdev=7688.65 00:33:09.965 clat (usec): min=1732, max=38655, avg=29927.00, stdev=3579.19 00:33:09.965 lat (usec): min=1744, max=38662, avg=29939.09, stdev=3578.63 00:33:09.965 clat percentiles (usec): 00:33:09.965 | 1.00th=[ 4228], 5.00th=[29492], 10.00th=[29754], 20.00th=[30016], 00:33:09.965 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:33:09.965 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31327], 00:33:09.965 | 99.00th=[31589], 99.50th=[31851], 99.90th=[38536], 99.95th=[38536], 00:33:09.965 | 99.99th=[38536] 00:33:09.965 bw ( KiB/s): min= 2043, max= 2736, per=4.23%, avg=2130.32, 
stdev=159.31, samples=19 00:33:09.965 iops : min= 510, max= 684, avg=532.42, stdev=39.85, samples=19 00:33:09.965 lat (msec) : 2=0.11%, 4=0.82%, 10=0.67%, 20=0.60%, 50=97.79% 00:33:09.965 cpu : usr=99.26%, sys=0.45%, ctx=15, majf=0, minf=54 00:33:09.965 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:09.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.965 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.965 issued rwts: total=5334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.965 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:09.965 filename0: (groupid=0, jobs=1): err= 0: pid=2179335: Mon Jun 10 12:11:01 2024 00:33:09.965 read: IOPS=529, BW=2117KiB/s (2168kB/s)(20.7MiB/10007msec) 00:33:09.965 slat (nsec): min=5521, max=65538, avg=12879.25, stdev=8510.37 00:33:09.965 clat (usec): min=4184, max=38660, avg=30122.77, stdev=2653.92 00:33:09.965 lat (usec): min=4206, max=38672, avg=30135.65, stdev=2653.34 00:33:09.965 clat percentiles (usec): 00:33:09.965 | 1.00th=[13960], 5.00th=[29492], 10.00th=[29754], 20.00th=[30016], 00:33:09.965 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:33:09.965 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31327], 00:33:09.965 | 99.00th=[31589], 99.50th=[31851], 99.90th=[38536], 99.95th=[38536], 00:33:09.965 | 99.99th=[38536] 00:33:09.965 bw ( KiB/s): min= 2048, max= 2436, per=4.21%, avg=2115.26, stdev=99.39, samples=19 00:33:09.965 iops : min= 512, max= 609, avg=528.74, stdev=24.80, samples=19 00:33:09.965 lat (msec) : 10=0.60%, 20=0.91%, 50=98.49% 00:33:09.965 cpu : usr=99.14%, sys=0.56%, ctx=10, majf=0, minf=24 00:33:09.965 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:09.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.965 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.965 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.965 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:09.965 filename0: (groupid=0, jobs=1): err= 0: pid=2179336: Mon Jun 10 12:11:01 2024 00:33:09.965 read: IOPS=522, BW=2092KiB/s (2142kB/s)(20.4MiB/10005msec) 00:33:09.965 slat (nsec): min=5501, max=90400, avg=16903.48, stdev=13910.99 00:33:09.965 clat (usec): min=19953, max=42643, avg=30457.36, stdev=1255.18 00:33:09.965 lat (usec): min=19958, max=42660, avg=30474.26, stdev=1255.29 00:33:09.965 clat percentiles (usec): 00:33:09.965 | 1.00th=[28967], 5.00th=[29492], 10.00th=[29754], 20.00th=[30016], 00:33:09.965 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:33:09.965 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31327], 00:33:09.965 | 99.00th=[36439], 99.50th=[40633], 99.90th=[42730], 99.95th=[42730], 00:33:09.965 | 99.99th=[42730] 00:33:09.965 bw ( KiB/s): min= 2043, max= 2176, per=4.15%, avg=2087.11, stdev=59.75, samples=19 00:33:09.965 iops : min= 510, max= 544, avg=521.58, stdev=14.74, samples=19 00:33:09.965 lat (msec) : 20=0.04%, 50=99.96% 00:33:09.965 cpu : usr=99.30%, sys=0.42%, ctx=12, majf=0, minf=37 00:33:09.965 IO depths : 1=5.5%, 2=11.7%, 4=24.9%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:33:09.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.965 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.965 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.965 latency 
: target=0, window=0, percentile=100.00%, depth=16 00:33:09.965 filename0: (groupid=0, jobs=1): err= 0: pid=2179337: Mon Jun 10 12:11:01 2024 00:33:09.965 read: IOPS=522, BW=2092KiB/s (2142kB/s)(20.4MiB/10004msec) 00:33:09.965 slat (usec): min=5, max=110, avg=16.62, stdev=15.87 00:33:09.965 clat (usec): min=27318, max=41449, avg=30456.42, stdev=975.11 00:33:09.965 lat (usec): min=27333, max=41465, avg=30473.04, stdev=974.13 00:33:09.965 clat percentiles (usec): 00:33:09.965 | 1.00th=[28967], 5.00th=[29492], 10.00th=[29754], 20.00th=[30016], 00:33:09.965 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:33:09.965 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31327], 00:33:09.965 | 99.00th=[32113], 99.50th=[40633], 99.90th=[41157], 99.95th=[41681], 00:33:09.965 | 99.99th=[41681] 00:33:09.965 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2087.58, stdev=59.41, samples=19 00:33:09.965 iops : min= 512, max= 544, avg=521.74, stdev=14.62, samples=19 00:33:09.965 lat (msec) : 50=100.00% 00:33:09.965 cpu : usr=97.75%, sys=1.21%, ctx=94, majf=0, minf=29 00:33:09.965 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:09.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.965 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.965 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.965 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:09.965 filename0: (groupid=0, jobs=1): err= 0: pid=2179338: Mon Jun 10 12:11:01 2024 00:33:09.965 read: IOPS=508, BW=2032KiB/s (2081kB/s)(19.9MiB/10011msec) 00:33:09.965 slat (usec): min=5, max=126, avg=17.83, stdev=15.76 00:33:09.965 clat (usec): min=11442, max=66615, avg=31383.08, stdev=5313.83 00:33:09.965 lat (usec): min=11449, max=66646, avg=31400.90, stdev=5313.17 00:33:09.965 clat percentiles (usec): 00:33:09.965 | 1.00th=[19792], 5.00th=[23462], 10.00th=[26870], 20.00th=[29754], 00:33:09.965 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:33:09.965 | 70.00th=[30802], 80.00th=[32375], 90.00th=[37487], 95.00th=[40633], 00:33:09.965 | 99.00th=[49021], 99.50th=[50070], 99.90th=[57410], 99.95th=[66323], 00:33:09.965 | 99.99th=[66847] 00:33:09.965 bw ( KiB/s): min= 1840, max= 2176, per=4.05%, avg=2039.47, stdev=89.12, samples=19 00:33:09.965 iops : min= 460, max= 544, avg=509.79, stdev=22.28, samples=19 00:33:09.965 lat (msec) : 20=1.30%, 50=98.11%, 100=0.59% 00:33:09.965 cpu : usr=97.94%, sys=1.06%, ctx=42, majf=0, minf=27 00:33:09.965 IO depths : 1=1.6%, 2=3.1%, 4=9.5%, 8=72.3%, 16=13.6%, 32=0.0%, >=64=0.0% 00:33:09.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.965 complete : 0=0.0%, 4=90.5%, 8=6.4%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.965 issued rwts: total=5086,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.965 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:09.965 filename0: (groupid=0, jobs=1): err= 0: pid=2179339: Mon Jun 10 12:11:01 2024 00:33:09.965 read: IOPS=528, BW=2113KiB/s (2163kB/s)(20.6MiB/10005msec) 00:33:09.965 slat (nsec): min=5494, max=99453, avg=17192.02, stdev=15877.25 00:33:09.965 clat (usec): min=9492, max=47467, avg=30177.76, stdev=2741.05 00:33:09.965 lat (usec): min=9499, max=47484, avg=30194.95, stdev=2741.98 00:33:09.966 clat percentiles (usec): 00:33:09.966 | 1.00th=[18744], 5.00th=[27132], 10.00th=[29492], 20.00th=[30016], 00:33:09.966 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 
60.00th=[30540], 00:33:09.966 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31327], 00:33:09.966 | 99.00th=[40109], 99.50th=[44827], 99.90th=[47449], 99.95th=[47449], 00:33:09.966 | 99.99th=[47449] 00:33:09.966 bw ( KiB/s): min= 1920, max= 2272, per=4.18%, avg=2102.53, stdev=73.79, samples=19 00:33:09.966 iops : min= 480, max= 568, avg=525.47, stdev=18.49, samples=19 00:33:09.966 lat (msec) : 10=0.17%, 20=1.55%, 50=98.28% 00:33:09.966 cpu : usr=99.26%, sys=0.40%, ctx=43, majf=0, minf=40 00:33:09.966 IO depths : 1=1.3%, 2=4.1%, 4=12.0%, 8=68.4%, 16=14.3%, 32=0.0%, >=64=0.0% 00:33:09.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.966 complete : 0=0.0%, 4=91.6%, 8=5.6%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.966 issued rwts: total=5284,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.966 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:09.966 filename0: (groupid=0, jobs=1): err= 0: pid=2179340: Mon Jun 10 12:11:01 2024 00:33:09.966 read: IOPS=522, BW=2092KiB/s (2142kB/s)(20.4MiB/10006msec) 00:33:09.966 slat (usec): min=5, max=110, avg=23.46, stdev=20.39 00:33:09.966 clat (usec): min=11517, max=44132, avg=30403.73, stdev=1395.05 00:33:09.966 lat (usec): min=11525, max=44218, avg=30427.19, stdev=1394.02 00:33:09.966 clat percentiles (usec): 00:33:09.966 | 1.00th=[27919], 5.00th=[29492], 10.00th=[29754], 20.00th=[30016], 00:33:09.966 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:33:09.966 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31327], 00:33:09.966 | 99.00th=[34866], 99.50th=[41157], 99.90th=[42730], 99.95th=[42730], 00:33:09.966 | 99.99th=[44303] 00:33:09.966 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2087.37, stdev=59.55, samples=19 00:33:09.966 iops : min= 512, max= 544, avg=521.68, stdev=14.66, samples=19 00:33:09.966 lat (msec) : 20=0.11%, 50=99.89% 00:33:09.966 cpu : usr=98.88%, sys=0.81%, ctx=18, majf=0, minf=28 00:33:09.966 IO depths : 1=5.9%, 2=12.1%, 4=24.8%, 8=50.6%, 16=6.6%, 32=0.0%, >=64=0.0% 00:33:09.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.966 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.966 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.966 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:09.966 filename1: (groupid=0, jobs=1): err= 0: pid=2179341: Mon Jun 10 12:11:01 2024 00:33:09.966 read: IOPS=537, BW=2152KiB/s (2204kB/s)(21.0MiB/10004msec) 00:33:09.966 slat (nsec): min=5526, max=97268, avg=25598.79, stdev=18488.87 00:33:09.966 clat (usec): min=4123, max=51230, avg=29521.94, stdev=3912.35 00:33:09.966 lat (usec): min=4140, max=51237, avg=29547.54, stdev=3914.42 00:33:09.966 clat percentiles (usec): 00:33:09.966 | 1.00th=[ 7242], 5.00th=[22938], 10.00th=[27132], 20.00th=[29754], 00:33:09.966 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:33:09.966 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31851], 00:33:09.966 | 99.00th=[38536], 99.50th=[41157], 99.90th=[50070], 99.95th=[50594], 00:33:09.966 | 99.99th=[51119] 00:33:09.966 bw ( KiB/s): min= 1968, max= 2544, per=4.28%, avg=2151.26, stdev=129.32, samples=19 00:33:09.966 iops : min= 492, max= 636, avg=537.74, stdev=32.32, samples=19 00:33:09.966 lat (msec) : 10=1.19%, 20=2.06%, 50=96.56%, 100=0.19% 00:33:09.966 cpu : usr=97.98%, sys=1.17%, ctx=650, majf=0, minf=59 00:33:09.966 IO depths : 1=4.0%, 2=8.3%, 4=18.6%, 8=59.6%, 16=9.6%, 32=0.0%, 
>=64=0.0% 00:33:09.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.966 complete : 0=0.0%, 4=92.7%, 8=2.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.966 issued rwts: total=5382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.966 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:09.966 filename1: (groupid=0, jobs=1): err= 0: pid=2179342: Mon Jun 10 12:11:01 2024 00:33:09.966 read: IOPS=524, BW=2097KiB/s (2148kB/s)(20.5MiB/10009msec) 00:33:09.966 slat (nsec): min=5367, max=72553, avg=14164.22, stdev=9235.60 00:33:09.966 clat (usec): min=14809, max=38624, avg=30372.67, stdev=1205.06 00:33:09.966 lat (usec): min=14814, max=38631, avg=30386.83, stdev=1205.60 00:33:09.966 clat percentiles (usec): 00:33:09.966 | 1.00th=[28967], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:33:09.966 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:33:09.966 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31327], 00:33:09.966 | 99.00th=[32113], 99.50th=[38011], 99.90th=[38536], 99.95th=[38536], 00:33:09.966 | 99.99th=[38536] 00:33:09.966 bw ( KiB/s): min= 2043, max= 2176, per=4.16%, avg=2093.84, stdev=63.39, samples=19 00:33:09.966 iops : min= 510, max= 544, avg=523.26, stdev=15.85, samples=19 00:33:09.966 lat (msec) : 20=0.30%, 50=99.70% 00:33:09.966 cpu : usr=99.14%, sys=0.53%, ctx=21, majf=0, minf=24 00:33:09.966 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:09.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.966 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.966 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.966 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:09.966 filename1: (groupid=0, jobs=1): err= 0: pid=2179343: Mon Jun 10 12:11:01 2024 00:33:09.966 read: IOPS=524, BW=2097KiB/s (2147kB/s)(20.5MiB/10012msec) 00:33:09.966 slat (nsec): min=5502, max=90840, avg=9689.09, stdev=8720.58 00:33:09.966 clat (usec): min=19279, max=45298, avg=30450.36, stdev=1703.92 00:33:09.966 lat (usec): min=19287, max=45304, avg=30460.05, stdev=1703.85 00:33:09.966 clat percentiles (usec): 00:33:09.966 | 1.00th=[23725], 5.00th=[29492], 10.00th=[29754], 20.00th=[30016], 00:33:09.966 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:33:09.966 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31327], 00:33:09.966 | 99.00th=[38011], 99.50th=[39584], 99.90th=[41157], 99.95th=[41157], 00:33:09.966 | 99.99th=[45351] 00:33:09.966 bw ( KiB/s): min= 2043, max= 2176, per=4.16%, avg=2094.37, stdev=58.63, samples=19 00:33:09.966 iops : min= 510, max= 544, avg=523.47, stdev=14.68, samples=19 00:33:09.966 lat (msec) : 20=0.04%, 50=99.96% 00:33:09.966 cpu : usr=99.33%, sys=0.38%, ctx=9, majf=0, minf=42 00:33:09.966 IO depths : 1=3.6%, 2=9.7%, 4=24.5%, 8=53.3%, 16=8.9%, 32=0.0%, >=64=0.0% 00:33:09.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.966 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.966 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.966 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:09.966 filename1: (groupid=0, jobs=1): err= 0: pid=2179344: Mon Jun 10 12:11:01 2024 00:33:09.966 read: IOPS=511, BW=2045KiB/s (2094kB/s)(20.0MiB/10005msec) 00:33:09.966 slat (usec): min=5, max=108, avg=15.86, stdev=15.69 00:33:09.966 clat (usec): min=11827, max=55766, 
avg=31221.93, stdev=5784.72 00:33:09.966 lat (usec): min=11838, max=55772, avg=31237.79, stdev=5784.54 00:33:09.966 clat percentiles (usec): 00:33:09.966 | 1.00th=[13173], 5.00th=[22414], 10.00th=[26084], 20.00th=[29754], 00:33:09.966 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:33:09.966 | 70.00th=[31065], 80.00th=[32637], 90.00th=[38011], 95.00th=[42206], 00:33:09.966 | 99.00th=[50070], 99.50th=[50594], 99.90th=[54789], 99.95th=[55837], 00:33:09.966 | 99.99th=[55837] 00:33:09.966 bw ( KiB/s): min= 1888, max= 2187, per=4.07%, avg=2047.53, stdev=86.00, samples=19 00:33:09.966 iops : min= 472, max= 546, avg=511.68, stdev=21.48, samples=19 00:33:09.966 lat (msec) : 20=2.78%, 50=96.17%, 100=1.06% 00:33:09.966 cpu : usr=97.85%, sys=1.06%, ctx=124, majf=0, minf=34 00:33:09.966 IO depths : 1=0.2%, 2=0.4%, 4=6.1%, 8=77.6%, 16=15.7%, 32=0.0%, >=64=0.0% 00:33:09.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.966 complete : 0=0.0%, 4=90.0%, 8=7.7%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.966 issued rwts: total=5114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.966 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:09.966 filename1: (groupid=0, jobs=1): err= 0: pid=2179345: Mon Jun 10 12:11:01 2024 00:33:09.966 read: IOPS=522, BW=2092KiB/s (2142kB/s)(20.4MiB/10005msec) 00:33:09.966 slat (nsec): min=5704, max=98382, avg=29793.78, stdev=18316.55 00:33:09.966 clat (usec): min=20582, max=50354, avg=30327.55, stdev=1294.22 00:33:09.966 lat (usec): min=20590, max=50370, avg=30357.35, stdev=1293.62 00:33:09.966 clat percentiles (usec): 00:33:09.966 | 1.00th=[28443], 5.00th=[29492], 10.00th=[29754], 20.00th=[30016], 00:33:09.966 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:33:09.966 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:33:09.966 | 99.00th=[34341], 99.50th=[41157], 99.90th=[42206], 99.95th=[50070], 00:33:09.966 | 99.99th=[50594] 00:33:09.966 bw ( KiB/s): min= 2032, max= 2176, per=4.15%, avg=2087.37, stdev=59.79, samples=19 00:33:09.966 iops : min= 508, max= 544, avg=521.68, stdev=14.72, samples=19 00:33:09.966 lat (msec) : 50=99.94%, 100=0.06% 00:33:09.966 cpu : usr=99.09%, sys=0.63%, ctx=14, majf=0, minf=30 00:33:09.966 IO depths : 1=5.6%, 2=11.7%, 4=24.8%, 8=50.9%, 16=7.0%, 32=0.0%, >=64=0.0% 00:33:09.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.967 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.967 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.967 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:09.967 filename1: (groupid=0, jobs=1): err= 0: pid=2179346: Mon Jun 10 12:11:01 2024 00:33:09.967 read: IOPS=527, BW=2110KiB/s (2161kB/s)(20.6MiB/10013msec) 00:33:09.967 slat (usec): min=5, max=106, avg=23.22, stdev=17.57 00:33:09.967 clat (usec): min=12129, max=51330, avg=30134.62, stdev=2319.26 00:33:09.967 lat (usec): min=12136, max=51339, avg=30157.84, stdev=2320.22 00:33:09.967 clat percentiles (usec): 00:33:09.967 | 1.00th=[19530], 5.00th=[28967], 10.00th=[29492], 20.00th=[30016], 00:33:09.967 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:33:09.967 | 70.00th=[30540], 80.00th=[30540], 90.00th=[31065], 95.00th=[31327], 00:33:09.967 | 99.00th=[36439], 99.50th=[41157], 99.90th=[51119], 99.95th=[51119], 00:33:09.967 | 99.99th=[51119] 00:33:09.967 bw ( KiB/s): min= 2043, max= 2320, per=4.19%, avg=2108.68, stdev=80.35, samples=19 
00:33:09.967 iops : min= 510, max= 580, avg=527.05, stdev=20.06, samples=19 00:33:09.967 lat (msec) : 20=1.08%, 50=98.73%, 100=0.19% 00:33:09.967 cpu : usr=99.14%, sys=0.53%, ctx=58, majf=0, minf=31 00:33:09.967 IO depths : 1=5.6%, 2=11.3%, 4=23.4%, 8=52.6%, 16=7.1%, 32=0.0%, >=64=0.0% 00:33:09.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.967 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.967 issued rwts: total=5282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.967 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:09.967 filename1: (groupid=0, jobs=1): err= 0: pid=2179347: Mon Jun 10 12:11:01 2024 00:33:09.967 read: IOPS=521, BW=2087KiB/s (2137kB/s)(20.4MiB/10017msec) 00:33:09.967 slat (nsec): min=5502, max=96396, avg=23768.13, stdev=17213.88 00:33:09.967 clat (usec): min=12692, max=51556, avg=30449.01, stdev=2563.90 00:33:09.967 lat (usec): min=12701, max=51564, avg=30472.78, stdev=2563.31 00:33:09.967 clat percentiles (usec): 00:33:09.967 | 1.00th=[24511], 5.00th=[29230], 10.00th=[29754], 20.00th=[30016], 00:33:09.967 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:33:09.967 | 70.00th=[30540], 80.00th=[30540], 90.00th=[31065], 95.00th=[31327], 00:33:09.967 | 99.00th=[44303], 99.50th=[46400], 99.90th=[51643], 99.95th=[51643], 00:33:09.967 | 99.99th=[51643] 00:33:09.967 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2084.84, stdev=75.71, samples=19 00:33:09.967 iops : min= 480, max= 544, avg=521.05, stdev=18.94, samples=19 00:33:09.967 lat (msec) : 20=0.73%, 50=99.16%, 100=0.11% 00:33:09.967 cpu : usr=99.18%, sys=0.52%, ctx=15, majf=0, minf=24 00:33:09.967 IO depths : 1=5.1%, 2=10.3%, 4=23.2%, 8=53.8%, 16=7.6%, 32=0.0%, >=64=0.0% 00:33:09.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.967 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.967 issued rwts: total=5226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.967 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:09.967 filename1: (groupid=0, jobs=1): err= 0: pid=2179348: Mon Jun 10 12:11:01 2024 00:33:09.967 read: IOPS=524, BW=2098KiB/s (2148kB/s)(20.5MiB/10008msec) 00:33:09.967 slat (usec): min=5, max=105, avg=26.44, stdev=16.24 00:33:09.967 clat (usec): min=8126, max=50254, avg=30277.32, stdev=1965.16 00:33:09.967 lat (usec): min=8131, max=50270, avg=30303.75, stdev=1965.27 00:33:09.967 clat percentiles (usec): 00:33:09.967 | 1.00th=[27657], 5.00th=[29492], 10.00th=[29754], 20.00th=[30016], 00:33:09.967 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:33:09.967 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:33:09.967 | 99.00th=[32375], 99.50th=[40633], 99.90th=[50070], 99.95th=[50070], 00:33:09.967 | 99.99th=[50070] 00:33:09.967 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2087.89, stdev=74.39, samples=19 00:33:09.967 iops : min= 480, max= 544, avg=521.89, stdev=18.58, samples=19 00:33:09.967 lat (msec) : 10=0.30%, 20=0.30%, 50=99.09%, 100=0.30% 00:33:09.967 cpu : usr=99.22%, sys=0.49%, ctx=12, majf=0, minf=32 00:33:09.967 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:09.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.967 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.967 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.967 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:33:09.967 filename2: (groupid=0, jobs=1): err= 0: pid=2179349: Mon Jun 10 12:11:01 2024 00:33:09.967 read: IOPS=522, BW=2092KiB/s (2142kB/s)(20.4MiB/10005msec) 00:33:09.967 slat (nsec): min=5364, max=97537, avg=16952.17, stdev=14377.11 00:33:09.967 clat (usec): min=8193, max=47835, avg=30488.61, stdev=2635.50 00:33:09.967 lat (usec): min=8199, max=47842, avg=30505.57, stdev=2635.64 00:33:09.967 clat percentiles (usec): 00:33:09.967 | 1.00th=[21103], 5.00th=[29230], 10.00th=[29754], 20.00th=[30016], 00:33:09.967 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:33:09.967 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31327], 95.00th=[32113], 00:33:09.967 | 99.00th=[40633], 99.50th=[45876], 99.90th=[47973], 99.95th=[47973], 00:33:09.967 | 99.99th=[47973] 00:33:09.967 bw ( KiB/s): min= 1923, max= 2160, per=4.14%, avg=2081.68, stdev=61.30, samples=19 00:33:09.967 iops : min= 480, max= 540, avg=520.26, stdev=15.38, samples=19 00:33:09.967 lat (msec) : 10=0.31%, 20=0.38%, 50=99.31% 00:33:09.967 cpu : usr=99.33%, sys=0.39%, ctx=14, majf=0, minf=45 00:33:09.967 IO depths : 1=0.3%, 2=3.2%, 4=12.2%, 8=69.1%, 16=15.2%, 32=0.0%, >=64=0.0% 00:33:09.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.967 complete : 0=0.0%, 4=91.7%, 8=5.4%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.967 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.967 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:09.967 filename2: (groupid=0, jobs=1): err= 0: pid=2179350: Mon Jun 10 12:11:01 2024 00:33:09.967 read: IOPS=524, BW=2098KiB/s (2148kB/s)(20.5MiB/10007msec) 00:33:09.967 slat (usec): min=5, max=118, avg=27.10, stdev=18.08 00:33:09.967 clat (usec): min=8869, max=56797, avg=30263.26, stdev=2026.05 00:33:09.967 lat (usec): min=8877, max=56812, avg=30290.35, stdev=2025.83 00:33:09.967 clat percentiles (usec): 00:33:09.967 | 1.00th=[26870], 5.00th=[29492], 10.00th=[29754], 20.00th=[30016], 00:33:09.967 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:33:09.967 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31327], 00:33:09.967 | 99.00th=[32375], 99.50th=[40633], 99.90th=[48497], 99.95th=[48497], 00:33:09.967 | 99.99th=[56886] 00:33:09.967 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2088.05, stdev=74.01, samples=19 00:33:09.967 iops : min= 480, max= 544, avg=521.89, stdev=18.58, samples=19 00:33:09.967 lat (msec) : 10=0.30%, 20=0.30%, 50=99.35%, 100=0.04% 00:33:09.967 cpu : usr=98.24%, sys=0.99%, ctx=43, majf=0, minf=43 00:33:09.967 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:09.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.967 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.967 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.967 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:09.967 filename2: (groupid=0, jobs=1): err= 0: pid=2179351: Mon Jun 10 12:11:01 2024 00:33:09.967 read: IOPS=524, BW=2096KiB/s (2147kB/s)(20.5MiB/10014msec) 00:33:09.967 slat (nsec): min=5542, max=50141, avg=11273.54, stdev=6400.83 00:33:09.967 clat (usec): min=16547, max=40155, avg=30414.32, stdev=1116.34 00:33:09.967 lat (usec): min=16559, max=40186, avg=30425.60, stdev=1116.28 00:33:09.967 clat percentiles (usec): 00:33:09.967 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29754], 20.00th=[30016], 00:33:09.967 | 30.00th=[30278], 
40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:33:09.967 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31327], 00:33:09.967 | 99.00th=[31851], 99.50th=[35390], 99.90th=[38536], 99.95th=[39060], 00:33:09.967 | 99.99th=[40109] 00:33:09.967 bw ( KiB/s): min= 2043, max= 2176, per=4.16%, avg=2094.37, stdev=62.96, samples=19 00:33:09.967 iops : min= 510, max= 544, avg=523.47, stdev=15.68, samples=19 00:33:09.967 lat (msec) : 20=0.30%, 50=99.70% 00:33:09.967 cpu : usr=97.63%, sys=1.42%, ctx=93, majf=0, minf=44 00:33:09.967 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:09.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.967 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.967 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.967 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:09.967 filename2: (groupid=0, jobs=1): err= 0: pid=2179352: Mon Jun 10 12:11:01 2024 00:33:09.967 read: IOPS=534, BW=2138KiB/s (2189kB/s)(20.9MiB/10006msec) 00:33:09.967 slat (nsec): min=5444, max=98055, avg=27469.88, stdev=17491.07 00:33:09.967 clat (usec): min=8358, max=48080, avg=29679.07, stdev=3236.06 00:33:09.967 lat (usec): min=8365, max=48097, avg=29706.54, stdev=3240.46 00:33:09.967 clat percentiles (usec): 00:33:09.968 | 1.00th=[17171], 5.00th=[23725], 10.00th=[29230], 20.00th=[29754], 00:33:09.968 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:33:09.968 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:33:09.968 | 99.00th=[40109], 99.50th=[42730], 99.90th=[47973], 99.95th=[47973], 00:33:09.968 | 99.99th=[47973] 00:33:09.968 bw ( KiB/s): min= 1920, max= 2283, per=4.17%, avg=2099.16, stdev=86.80, samples=19 00:33:09.968 iops : min= 480, max= 570, avg=524.63, stdev=21.63, samples=19 00:33:09.968 lat (msec) : 10=0.34%, 20=3.70%, 50=95.96% 00:33:09.968 cpu : usr=97.76%, sys=1.17%, ctx=87, majf=0, minf=31 00:33:09.968 IO depths : 1=5.4%, 2=11.0%, 4=23.0%, 8=53.4%, 16=7.3%, 32=0.0%, >=64=0.0% 00:33:09.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.968 complete : 0=0.0%, 4=93.6%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.968 issued rwts: total=5348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.968 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:09.968 filename2: (groupid=0, jobs=1): err= 0: pid=2179353: Mon Jun 10 12:11:01 2024 00:33:09.968 read: IOPS=524, BW=2100KiB/s (2150kB/s)(20.5MiB/10005msec) 00:33:09.968 slat (nsec): min=5504, max=97356, avg=29602.09, stdev=19216.86 00:33:09.968 clat (usec): min=8894, max=47171, avg=30188.25, stdev=2424.65 00:33:09.968 lat (usec): min=8909, max=47191, avg=30217.86, stdev=2425.81 00:33:09.968 clat percentiles (usec): 00:33:09.968 | 1.00th=[20579], 5.00th=[28967], 10.00th=[29492], 20.00th=[30016], 00:33:09.968 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:33:09.968 | 70.00th=[30278], 80.00th=[30540], 90.00th=[31065], 95.00th=[31327], 00:33:09.968 | 99.00th=[40633], 99.50th=[43779], 99.90th=[46924], 99.95th=[46924], 00:33:09.968 | 99.99th=[46924] 00:33:09.968 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2088.37, stdev=73.95, samples=19 00:33:09.968 iops : min= 480, max= 544, avg=521.89, stdev=18.61, samples=19 00:33:09.968 lat (msec) : 10=0.30%, 20=0.61%, 50=99.09% 00:33:09.968 cpu : usr=97.61%, sys=1.29%, ctx=62, majf=0, minf=29 00:33:09.968 IO depths : 1=5.5%, 2=11.4%, 4=23.8%, 
8=52.2%, 16=7.2%, 32=0.0%, >=64=0.0% 00:33:09.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.968 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.968 issued rwts: total=5252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.968 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:09.968 filename2: (groupid=0, jobs=1): err= 0: pid=2179354: Mon Jun 10 12:11:01 2024 00:33:09.968 read: IOPS=521, BW=2085KiB/s (2135kB/s)(20.4MiB/10012msec) 00:33:09.968 slat (nsec): min=5404, max=88848, avg=12743.59, stdev=9511.48 00:33:09.968 clat (usec): min=11507, max=59370, avg=30605.51, stdev=2759.58 00:33:09.968 lat (usec): min=11514, max=59390, avg=30618.25, stdev=2759.63 00:33:09.968 clat percentiles (usec): 00:33:09.968 | 1.00th=[22152], 5.00th=[29230], 10.00th=[29754], 20.00th=[30016], 00:33:09.968 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:33:09.968 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31327], 95.00th=[32113], 00:33:09.968 | 99.00th=[41681], 99.50th=[47973], 99.90th=[54264], 99.95th=[54264], 00:33:09.968 | 99.99th=[59507] 00:33:09.968 bw ( KiB/s): min= 1964, max= 2176, per=4.14%, avg=2080.47, stdev=58.96, samples=19 00:33:09.968 iops : min= 491, max= 544, avg=520.00, stdev=14.64, samples=19 00:33:09.968 lat (msec) : 20=0.77%, 50=98.81%, 100=0.42% 00:33:09.968 cpu : usr=99.30%, sys=0.40%, ctx=17, majf=0, minf=42 00:33:09.968 IO depths : 1=1.4%, 2=6.5%, 4=21.3%, 8=58.9%, 16=11.8%, 32=0.0%, >=64=0.0% 00:33:09.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.968 complete : 0=0.0%, 4=93.5%, 8=1.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.968 issued rwts: total=5218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.968 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:09.968 filename2: (groupid=0, jobs=1): err= 0: pid=2179355: Mon Jun 10 12:11:01 2024 00:33:09.968 read: IOPS=522, BW=2092KiB/s (2142kB/s)(20.4MiB/10005msec) 00:33:09.968 slat (nsec): min=5504, max=60427, avg=12082.63, stdev=8682.61 00:33:09.968 clat (usec): min=22165, max=42029, avg=30472.20, stdev=935.79 00:33:09.968 lat (usec): min=22171, max=42047, avg=30484.29, stdev=935.55 00:33:09.968 clat percentiles (usec): 00:33:09.968 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29754], 20.00th=[30016], 00:33:09.968 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:33:09.968 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31327], 00:33:09.968 | 99.00th=[31851], 99.50th=[38536], 99.90th=[42206], 99.95th=[42206], 00:33:09.968 | 99.99th=[42206] 00:33:09.968 bw ( KiB/s): min= 2043, max= 2176, per=4.15%, avg=2087.37, stdev=60.15, samples=19 00:33:09.968 iops : min= 510, max= 544, avg=521.68, stdev=14.90, samples=19 00:33:09.968 lat (msec) : 50=100.00% 00:33:09.968 cpu : usr=97.85%, sys=1.08%, ctx=54, majf=0, minf=30 00:33:09.968 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:09.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.968 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.968 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.968 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:09.968 filename2: (groupid=0, jobs=1): err= 0: pid=2179356: Mon Jun 10 12:11:01 2024 00:33:09.968 read: IOPS=523, BW=2093KiB/s (2143kB/s)(20.5MiB/10006msec) 00:33:09.968 slat (nsec): min=5495, max=99057, avg=15698.73, stdev=13617.55 
00:33:09.968 clat (usec): min=8474, max=65719, avg=30515.19, stdev=2676.75 00:33:09.968 lat (usec): min=8480, max=65735, avg=30530.89, stdev=2676.71 00:33:09.968 clat percentiles (usec): 00:33:09.968 | 1.00th=[21890], 5.00th=[29492], 10.00th=[29754], 20.00th=[30278], 00:33:09.968 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:33:09.968 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31327], 00:33:09.968 | 99.00th=[39060], 99.50th=[47449], 99.90th=[51119], 99.95th=[65799], 00:33:09.968 | 99.99th=[65799] 00:33:09.968 bw ( KiB/s): min= 1904, max= 2144, per=4.14%, avg=2082.32, stdev=52.34, samples=19 00:33:09.968 iops : min= 476, max= 536, avg=520.42, stdev=13.12, samples=19 00:33:09.968 lat (msec) : 10=0.31%, 20=0.46%, 50=98.95%, 100=0.29% 00:33:09.968 cpu : usr=99.09%, sys=0.61%, ctx=16, majf=0, minf=32 00:33:09.968 IO depths : 1=0.1%, 2=0.2%, 4=1.6%, 8=80.0%, 16=18.2%, 32=0.0%, >=64=0.0% 00:33:09.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.968 complete : 0=0.0%, 4=89.7%, 8=9.8%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.968 issued rwts: total=5236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.968 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:09.968 00:33:09.968 Run status group 0 (all jobs): 00:33:09.968 READ: bw=49.1MiB/s (51.5MB/s), 2032KiB/s-2152KiB/s (2081kB/s-2204kB/s), io=492MiB (516MB), run=10004-10017msec 00:33:09.968 12:11:01 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:09.968 12:11:01 -- target/dif.sh@43 -- # local sub 00:33:09.968 12:11:01 -- target/dif.sh@45 -- # for sub in "$@" 00:33:09.968 12:11:01 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:09.968 12:11:01 -- target/dif.sh@36 -- # local sub_id=0 00:33:09.968 12:11:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:09.968 12:11:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:09.968 12:11:01 -- common/autotest_common.sh@10 -- # set +x 00:33:09.968 12:11:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:09.968 12:11:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:09.968 12:11:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:09.968 12:11:01 -- common/autotest_common.sh@10 -- # set +x 00:33:09.968 12:11:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:09.968 12:11:01 -- target/dif.sh@45 -- # for sub in "$@" 00:33:09.968 12:11:01 -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:09.968 12:11:01 -- target/dif.sh@36 -- # local sub_id=1 00:33:09.968 12:11:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:09.968 12:11:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:09.968 12:11:01 -- common/autotest_common.sh@10 -- # set +x 00:33:09.968 12:11:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:09.968 12:11:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:09.968 12:11:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:09.968 12:11:01 -- common/autotest_common.sh@10 -- # set +x 00:33:09.968 12:11:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:09.968 12:11:01 -- target/dif.sh@45 -- # for sub in "$@" 00:33:09.968 12:11:01 -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:09.968 12:11:01 -- target/dif.sh@36 -- # local sub_id=2 00:33:09.968 12:11:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:09.968 12:11:01 -- common/autotest_common.sh@551 -- # xtrace_disable 
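Editor's note: the destroy_subsystems sequence running here unwinds the setup in the reverse order it was built, removing each NQN from the target before deleting its backing null bdev. A minimal rpc.py equivalent (illustrative only, with an assumed script path):

#!/usr/bin/env bash
# Illustrative teardown matching the trace: subsystem first, then its bdev.
RPC=./scripts/rpc.py   # assumed location inside an SPDK checkout

for i in 0 1 2; do
    $RPC nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    $RPC bdev_null_delete "bdev_null$i"
done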
00:33:09.968 12:11:01 -- common/autotest_common.sh@10 -- # set +x 00:33:09.968 12:11:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:09.968 12:11:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:09.969 12:11:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:09.969 12:11:01 -- common/autotest_common.sh@10 -- # set +x 00:33:09.969 12:11:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:09.969 12:11:01 -- target/dif.sh@115 -- # NULL_DIF=1 00:33:09.969 12:11:01 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:09.969 12:11:01 -- target/dif.sh@115 -- # numjobs=2 00:33:09.969 12:11:01 -- target/dif.sh@115 -- # iodepth=8 00:33:09.969 12:11:01 -- target/dif.sh@115 -- # runtime=5 00:33:09.969 12:11:01 -- target/dif.sh@115 -- # files=1 00:33:09.969 12:11:01 -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:09.969 12:11:01 -- target/dif.sh@28 -- # local sub 00:33:09.969 12:11:01 -- target/dif.sh@30 -- # for sub in "$@" 00:33:09.969 12:11:01 -- target/dif.sh@31 -- # create_subsystem 0 00:33:09.969 12:11:01 -- target/dif.sh@18 -- # local sub_id=0 00:33:09.969 12:11:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:09.969 12:11:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:09.969 12:11:01 -- common/autotest_common.sh@10 -- # set +x 00:33:09.969 bdev_null0 00:33:09.969 12:11:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:09.969 12:11:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:09.969 12:11:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:09.969 12:11:01 -- common/autotest_common.sh@10 -- # set +x 00:33:09.969 12:11:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:09.969 12:11:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:09.969 12:11:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:09.969 12:11:01 -- common/autotest_common.sh@10 -- # set +x 00:33:09.969 12:11:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:09.969 12:11:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:09.969 12:11:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:09.969 12:11:01 -- common/autotest_common.sh@10 -- # set +x 00:33:09.969 [2024-06-10 12:11:01.883828] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:09.969 12:11:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:09.969 12:11:01 -- target/dif.sh@30 -- # for sub in "$@" 00:33:09.969 12:11:01 -- target/dif.sh@31 -- # create_subsystem 1 00:33:09.969 12:11:01 -- target/dif.sh@18 -- # local sub_id=1 00:33:09.969 12:11:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:09.969 12:11:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:09.969 12:11:01 -- common/autotest_common.sh@10 -- # set +x 00:33:09.969 bdev_null1 00:33:09.969 12:11:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:09.969 12:11:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:09.969 12:11:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:09.969 12:11:01 -- common/autotest_common.sh@10 -- # set +x 00:33:09.969 12:11:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
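Editor's note: the parameters set at target/dif.sh@115 above (NULL_DIF=1, bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1) translate into the two-file, four-thread randread job that fio reports a few lines further down. The generated job file is never echoed into the log, so the block below is only an approximation of an equivalent hand-written job; the bdev names Nvme0n1/Nvme1n1 assume SPDK's usual controller-to-namespace naming.

#!/usr/bin/env bash
# Approximate stand-in for the generated job description (not the real output
# of gen_fio_conf). bs=8k,16k,128k maps to read/write/trim block sizes, which
# is how the fio banner below reports (R) 8192B, (W) 16.0KiB, (T) 128KiB.
cat <<'EOF' > /tmp/dif.fio
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF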
00:33:09.969 12:11:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:09.969 12:11:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:09.969 12:11:01 -- common/autotest_common.sh@10 -- # set +x 00:33:09.969 12:11:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:09.969 12:11:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:09.969 12:11:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:09.969 12:11:01 -- common/autotest_common.sh@10 -- # set +x 00:33:09.969 12:11:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:09.969 12:11:01 -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:09.969 12:11:01 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:09.969 12:11:01 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:09.969 12:11:01 -- nvmf/common.sh@520 -- # config=() 00:33:09.969 12:11:01 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:09.969 12:11:01 -- nvmf/common.sh@520 -- # local subsystem config 00:33:09.969 12:11:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:09.969 12:11:01 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:09.969 12:11:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:09.969 { 00:33:09.969 "params": { 00:33:09.969 "name": "Nvme$subsystem", 00:33:09.969 "trtype": "$TEST_TRANSPORT", 00:33:09.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:09.969 "adrfam": "ipv4", 00:33:09.969 "trsvcid": "$NVMF_PORT", 00:33:09.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:09.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:09.969 "hdgst": ${hdgst:-false}, 00:33:09.969 "ddgst": ${ddgst:-false} 00:33:09.969 }, 00:33:09.969 "method": "bdev_nvme_attach_controller" 00:33:09.969 } 00:33:09.969 EOF 00:33:09.969 )") 00:33:09.969 12:11:01 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:09.969 12:11:01 -- target/dif.sh@82 -- # gen_fio_conf 00:33:09.969 12:11:01 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:09.969 12:11:01 -- target/dif.sh@54 -- # local file 00:33:09.969 12:11:01 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:09.969 12:11:01 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:09.969 12:11:01 -- target/dif.sh@56 -- # cat 00:33:09.969 12:11:01 -- common/autotest_common.sh@1320 -- # shift 00:33:09.969 12:11:01 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:09.969 12:11:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:09.969 12:11:01 -- nvmf/common.sh@542 -- # cat 00:33:09.969 12:11:01 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:09.969 12:11:01 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:09.969 12:11:01 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:09.969 12:11:01 -- target/dif.sh@72 -- # (( file <= files )) 00:33:09.969 12:11:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:09.969 12:11:01 -- target/dif.sh@73 -- # cat 00:33:09.969 12:11:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:09.969 12:11:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:09.969 { 
00:33:09.969 "params": { 00:33:09.969 "name": "Nvme$subsystem", 00:33:09.969 "trtype": "$TEST_TRANSPORT", 00:33:09.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:09.969 "adrfam": "ipv4", 00:33:09.969 "trsvcid": "$NVMF_PORT", 00:33:09.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:09.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:09.969 "hdgst": ${hdgst:-false}, 00:33:09.969 "ddgst": ${ddgst:-false} 00:33:09.969 }, 00:33:09.969 "method": "bdev_nvme_attach_controller" 00:33:09.969 } 00:33:09.969 EOF 00:33:09.969 )") 00:33:09.969 12:11:01 -- target/dif.sh@72 -- # (( file++ )) 00:33:09.969 12:11:01 -- nvmf/common.sh@542 -- # cat 00:33:09.969 12:11:01 -- target/dif.sh@72 -- # (( file <= files )) 00:33:09.969 12:11:01 -- nvmf/common.sh@544 -- # jq . 00:33:09.969 12:11:01 -- nvmf/common.sh@545 -- # IFS=, 00:33:09.969 12:11:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:09.969 "params": { 00:33:09.969 "name": "Nvme0", 00:33:09.969 "trtype": "tcp", 00:33:09.969 "traddr": "10.0.0.2", 00:33:09.969 "adrfam": "ipv4", 00:33:09.969 "trsvcid": "4420", 00:33:09.969 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:09.969 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:09.970 "hdgst": false, 00:33:09.970 "ddgst": false 00:33:09.970 }, 00:33:09.970 "method": "bdev_nvme_attach_controller" 00:33:09.970 },{ 00:33:09.970 "params": { 00:33:09.970 "name": "Nvme1", 00:33:09.970 "trtype": "tcp", 00:33:09.970 "traddr": "10.0.0.2", 00:33:09.970 "adrfam": "ipv4", 00:33:09.970 "trsvcid": "4420", 00:33:09.970 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:09.970 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:09.970 "hdgst": false, 00:33:09.970 "ddgst": false 00:33:09.970 }, 00:33:09.970 "method": "bdev_nvme_attach_controller" 00:33:09.970 }' 00:33:09.970 12:11:01 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:09.970 12:11:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:09.970 12:11:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:09.970 12:11:01 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:09.970 12:11:01 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:09.970 12:11:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:09.970 12:11:02 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:09.970 12:11:02 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:09.970 12:11:02 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:09.970 12:11:02 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:09.970 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:09.970 ... 00:33:09.970 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:09.970 ... 00:33:09.970 fio-3.35 00:33:09.970 Starting 4 threads 00:33:09.970 EAL: No free 2048 kB hugepages reported on node 1 00:33:09.970 [2024-06-10 12:11:02.830019] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:33:09.970 [2024-06-10 12:11:02.830057] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:15.259 00:33:15.259 filename0: (groupid=0, jobs=1): err= 0: pid=2181702: Mon Jun 10 12:11:08 2024 00:33:15.259 read: IOPS=2067, BW=16.1MiB/s (16.9MB/s)(80.8MiB/5002msec) 00:33:15.259 slat (nsec): min=5334, max=31891, avg=6024.65, stdev=1717.78 00:33:15.259 clat (usec): min=1578, max=7036, avg=3853.11, stdev=641.62 00:33:15.259 lat (usec): min=1601, max=7041, avg=3859.14, stdev=641.61 00:33:15.259 clat percentiles (usec): 00:33:15.259 | 1.00th=[ 2573], 5.00th=[ 3032], 10.00th=[ 3261], 20.00th=[ 3490], 00:33:15.259 | 30.00th=[ 3589], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3818], 00:33:15.259 | 70.00th=[ 3851], 80.00th=[ 4080], 90.00th=[ 4686], 95.00th=[ 5276], 00:33:15.259 | 99.00th=[ 5932], 99.50th=[ 6063], 99.90th=[ 6390], 99.95th=[ 6652], 00:33:15.259 | 99.99th=[ 7046] 00:33:15.259 bw ( KiB/s): min=16224, max=17264, per=25.03%, avg=16590.22, stdev=331.63, samples=9 00:33:15.259 iops : min= 2028, max= 2158, avg=2073.78, stdev=41.45, samples=9 00:33:15.259 lat (msec) : 2=0.11%, 4=77.67%, 10=22.22% 00:33:15.259 cpu : usr=96.78%, sys=2.70%, ctx=139, majf=0, minf=1 00:33:15.259 IO depths : 1=0.2%, 2=1.2%, 4=70.1%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:15.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.259 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.259 issued rwts: total=10340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:15.259 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:15.259 filename0: (groupid=0, jobs=1): err= 0: pid=2181704: Mon Jun 10 12:11:08 2024 00:33:15.259 read: IOPS=1964, BW=15.3MiB/s (16.1MB/s)(76.8MiB/5002msec) 00:33:15.259 slat (nsec): min=5329, max=33119, avg=5845.51, stdev=1418.78 00:33:15.259 clat (usec): min=1800, max=7034, avg=4055.98, stdev=727.44 00:33:15.259 lat (usec): min=1806, max=7040, avg=4061.83, stdev=727.38 00:33:15.259 clat percentiles (usec): 00:33:15.259 | 1.00th=[ 2900], 5.00th=[ 3294], 10.00th=[ 3458], 20.00th=[ 3556], 00:33:15.259 | 30.00th=[ 3687], 40.00th=[ 3752], 50.00th=[ 3785], 60.00th=[ 3851], 00:33:15.259 | 70.00th=[ 4080], 80.00th=[ 4555], 90.00th=[ 5342], 95.00th=[ 5669], 00:33:15.259 | 99.00th=[ 6128], 99.50th=[ 6325], 99.90th=[ 6849], 99.95th=[ 6915], 00:33:15.259 | 99.99th=[ 7046] 00:33:15.259 bw ( KiB/s): min=15056, max=15984, per=23.63%, avg=15660.44, stdev=274.78, samples=9 00:33:15.259 iops : min= 1882, max= 1998, avg=1957.56, stdev=34.35, samples=9 00:33:15.259 lat (msec) : 2=0.05%, 4=67.60%, 10=32.35% 00:33:15.259 cpu : usr=97.70%, sys=2.06%, ctx=15, majf=0, minf=9 00:33:15.259 IO depths : 1=0.3%, 2=1.0%, 4=71.8%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:15.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.259 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.259 issued rwts: total=9825,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:15.259 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:15.259 filename1: (groupid=0, jobs=1): err= 0: pid=2181705: Mon Jun 10 12:11:08 2024 00:33:15.259 read: IOPS=2299, BW=18.0MiB/s (18.8MB/s)(90.6MiB/5042msec) 00:33:15.259 slat (nsec): min=5326, max=31539, avg=5793.06, stdev=1199.11 00:33:15.259 clat (usec): min=836, max=42867, avg=3451.66, stdev=976.00 00:33:15.259 lat (usec): min=842, max=42872, avg=3457.45, stdev=975.96 00:33:15.259 clat percentiles (usec): 00:33:15.259 | 1.00th=[ 2114], 
5.00th=[ 2573], 10.00th=[ 2769], 20.00th=[ 2933], 00:33:15.259 | 30.00th=[ 3130], 40.00th=[ 3326], 50.00th=[ 3556], 60.00th=[ 3589], 00:33:15.259 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 3851], 95.00th=[ 4424], 00:33:15.259 | 99.00th=[ 5080], 99.50th=[ 5407], 99.90th=[ 5932], 99.95th=[ 6128], 00:33:15.259 | 99.99th=[42730] 00:33:15.259 bw ( KiB/s): min=17120, max=19920, per=28.18%, avg=18675.56, stdev=787.66, samples=9 00:33:15.259 iops : min= 2140, max= 2490, avg=2334.44, stdev=98.46, samples=9 00:33:15.259 lat (usec) : 1000=0.02% 00:33:15.259 lat (msec) : 2=0.66%, 4=91.19%, 10=8.10%, 50=0.04% 00:33:15.259 cpu : usr=97.32%, sys=2.32%, ctx=15, majf=0, minf=9 00:33:15.259 IO depths : 1=0.1%, 2=5.9%, 4=64.4%, 8=29.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:15.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.259 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.259 issued rwts: total=11594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:15.259 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:15.259 filename1: (groupid=0, jobs=1): err= 0: pid=2181706: Mon Jun 10 12:11:08 2024 00:33:15.259 read: IOPS=2001, BW=15.6MiB/s (16.4MB/s)(78.2MiB/5003msec) 00:33:15.260 slat (nsec): min=5333, max=33149, avg=5891.53, stdev=1462.42 00:33:15.260 clat (usec): min=2253, max=45670, avg=3979.78, stdev=1342.32 00:33:15.260 lat (usec): min=2258, max=45703, avg=3985.67, stdev=1342.51 00:33:15.260 clat percentiles (usec): 00:33:15.260 | 1.00th=[ 2900], 5.00th=[ 3228], 10.00th=[ 3392], 20.00th=[ 3556], 00:33:15.260 | 30.00th=[ 3621], 40.00th=[ 3720], 50.00th=[ 3785], 60.00th=[ 3818], 00:33:15.260 | 70.00th=[ 3916], 80.00th=[ 4228], 90.00th=[ 5014], 95.00th=[ 5473], 00:33:15.260 | 99.00th=[ 5997], 99.50th=[ 6194], 99.90th=[ 6849], 99.95th=[45351], 00:33:15.260 | 99.99th=[45876] 00:33:15.260 bw ( KiB/s): min=14941, max=16608, per=24.00%, avg=15905.44, stdev=468.10, samples=9 00:33:15.260 iops : min= 1867, max= 2076, avg=1988.11, stdev=58.67, samples=9 00:33:15.260 lat (msec) : 4=74.19%, 10=25.73%, 50=0.08% 00:33:15.260 cpu : usr=97.24%, sys=2.50%, ctx=11, majf=0, minf=9 00:33:15.260 IO depths : 1=0.2%, 2=1.0%, 4=71.6%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:15.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.260 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.260 issued rwts: total=10013,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:15.260 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:15.260 00:33:15.260 Run status group 0 (all jobs): 00:33:15.260 READ: bw=64.7MiB/s (67.9MB/s), 15.3MiB/s-18.0MiB/s (16.1MB/s-18.8MB/s), io=326MiB (342MB), run=5002-5042msec 00:33:15.260 12:11:08 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:15.260 12:11:08 -- target/dif.sh@43 -- # local sub 00:33:15.260 12:11:08 -- target/dif.sh@45 -- # for sub in "$@" 00:33:15.260 12:11:08 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:15.260 12:11:08 -- target/dif.sh@36 -- # local sub_id=0 00:33:15.260 12:11:08 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:15.260 12:11:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:15.260 12:11:08 -- common/autotest_common.sh@10 -- # set +x 00:33:15.260 12:11:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:15.260 12:11:08 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:15.260 12:11:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:15.260 12:11:08 
-- common/autotest_common.sh@10 -- # set +x 00:33:15.260 12:11:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:15.260 12:11:08 -- target/dif.sh@45 -- # for sub in "$@" 00:33:15.260 12:11:08 -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:15.260 12:11:08 -- target/dif.sh@36 -- # local sub_id=1 00:33:15.260 12:11:08 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:15.260 12:11:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:15.260 12:11:08 -- common/autotest_common.sh@10 -- # set +x 00:33:15.260 12:11:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:15.260 12:11:08 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:15.260 12:11:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:15.260 12:11:08 -- common/autotest_common.sh@10 -- # set +x 00:33:15.260 12:11:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:15.260 00:33:15.260 real 0m24.432s 00:33:15.260 user 5m14.615s 00:33:15.260 sys 0m3.908s 00:33:15.260 12:11:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:15.260 12:11:08 -- common/autotest_common.sh@10 -- # set +x 00:33:15.260 ************************************ 00:33:15.260 END TEST fio_dif_rand_params 00:33:15.260 ************************************ 00:33:15.260 12:11:08 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:15.260 12:11:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:15.260 12:11:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:15.260 12:11:08 -- common/autotest_common.sh@10 -- # set +x 00:33:15.260 ************************************ 00:33:15.260 START TEST fio_dif_digest 00:33:15.260 ************************************ 00:33:15.260 12:11:08 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:33:15.260 12:11:08 -- target/dif.sh@123 -- # local NULL_DIF 00:33:15.260 12:11:08 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:15.260 12:11:08 -- target/dif.sh@125 -- # local hdgst ddgst 00:33:15.260 12:11:08 -- target/dif.sh@127 -- # NULL_DIF=3 00:33:15.260 12:11:08 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:15.260 12:11:08 -- target/dif.sh@127 -- # numjobs=3 00:33:15.260 12:11:08 -- target/dif.sh@127 -- # iodepth=3 00:33:15.260 12:11:08 -- target/dif.sh@127 -- # runtime=10 00:33:15.260 12:11:08 -- target/dif.sh@128 -- # hdgst=true 00:33:15.260 12:11:08 -- target/dif.sh@128 -- # ddgst=true 00:33:15.260 12:11:08 -- target/dif.sh@130 -- # create_subsystems 0 00:33:15.260 12:11:08 -- target/dif.sh@28 -- # local sub 00:33:15.260 12:11:08 -- target/dif.sh@30 -- # for sub in "$@" 00:33:15.260 12:11:08 -- target/dif.sh@31 -- # create_subsystem 0 00:33:15.260 12:11:08 -- target/dif.sh@18 -- # local sub_id=0 00:33:15.260 12:11:08 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:15.260 12:11:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:15.260 12:11:08 -- common/autotest_common.sh@10 -- # set +x 00:33:15.260 bdev_null0 00:33:15.260 12:11:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:15.260 12:11:08 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:15.260 12:11:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:15.260 12:11:08 -- common/autotest_common.sh@10 -- # set +x 00:33:15.260 12:11:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:15.260 12:11:08 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:15.260 12:11:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:15.260 12:11:08 -- common/autotest_common.sh@10 -- # set +x 00:33:15.260 12:11:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:15.260 12:11:08 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:15.260 12:11:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:15.260 12:11:08 -- common/autotest_common.sh@10 -- # set +x 00:33:15.260 [2024-06-10 12:11:08.254235] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:15.260 12:11:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:15.260 12:11:08 -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:15.260 12:11:08 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:15.260 12:11:08 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:15.260 12:11:08 -- nvmf/common.sh@520 -- # config=() 00:33:15.260 12:11:08 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:15.260 12:11:08 -- nvmf/common.sh@520 -- # local subsystem config 00:33:15.260 12:11:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:15.260 12:11:08 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:15.260 12:11:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:15.260 { 00:33:15.260 "params": { 00:33:15.260 "name": "Nvme$subsystem", 00:33:15.260 "trtype": "$TEST_TRANSPORT", 00:33:15.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:15.260 "adrfam": "ipv4", 00:33:15.260 "trsvcid": "$NVMF_PORT", 00:33:15.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:15.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:15.260 "hdgst": ${hdgst:-false}, 00:33:15.260 "ddgst": ${ddgst:-false} 00:33:15.260 }, 00:33:15.260 "method": "bdev_nvme_attach_controller" 00:33:15.260 } 00:33:15.260 EOF 00:33:15.260 )") 00:33:15.260 12:11:08 -- target/dif.sh@82 -- # gen_fio_conf 00:33:15.260 12:11:08 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:15.260 12:11:08 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:15.260 12:11:08 -- target/dif.sh@54 -- # local file 00:33:15.260 12:11:08 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:15.260 12:11:08 -- target/dif.sh@56 -- # cat 00:33:15.260 12:11:08 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:15.260 12:11:08 -- nvmf/common.sh@542 -- # cat 00:33:15.260 12:11:08 -- common/autotest_common.sh@1320 -- # shift 00:33:15.260 12:11:08 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:15.260 12:11:08 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:15.260 12:11:08 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:15.260 12:11:08 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:15.260 12:11:08 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:15.260 12:11:08 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:15.260 12:11:08 -- target/dif.sh@72 -- # (( file <= files )) 00:33:15.260 12:11:08 -- nvmf/common.sh@544 -- # jq . 
00:33:15.260 12:11:08 -- nvmf/common.sh@545 -- # IFS=, 00:33:15.260 12:11:08 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:15.260 "params": { 00:33:15.260 "name": "Nvme0", 00:33:15.260 "trtype": "tcp", 00:33:15.260 "traddr": "10.0.0.2", 00:33:15.260 "adrfam": "ipv4", 00:33:15.260 "trsvcid": "4420", 00:33:15.260 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:15.260 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:15.260 "hdgst": true, 00:33:15.260 "ddgst": true 00:33:15.260 }, 00:33:15.260 "method": "bdev_nvme_attach_controller" 00:33:15.260 }' 00:33:15.260 12:11:08 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:15.260 12:11:08 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:15.260 12:11:08 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:15.261 12:11:08 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:15.261 12:11:08 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:15.261 12:11:08 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:15.261 12:11:08 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:15.261 12:11:08 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:15.261 12:11:08 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:15.261 12:11:08 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:15.261 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:15.261 ... 00:33:15.261 fio-3.35 00:33:15.261 Starting 3 threads 00:33:15.261 EAL: No free 2048 kB hugepages reported on node 1 00:33:15.520 [2024-06-10 12:11:09.048748] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:33:15.521 [2024-06-10 12:11:09.048786] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:25.516 00:33:25.516 filename0: (groupid=0, jobs=1): err= 0: pid=2183089: Mon Jun 10 12:11:19 2024 00:33:25.516 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(269MiB/10049msec) 00:33:25.516 slat (nsec): min=5570, max=30737, avg=6513.75, stdev=1004.44 00:33:25.516 clat (usec): min=7943, max=94592, avg=13972.98, stdev=7634.28 00:33:25.516 lat (usec): min=7949, max=94599, avg=13979.49, stdev=7634.28 00:33:25.516 clat percentiles (usec): 00:33:25.516 | 1.00th=[ 8848], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[11207], 00:33:25.516 | 30.00th=[11994], 40.00th=[12649], 50.00th=[13042], 60.00th=[13435], 00:33:25.516 | 70.00th=[13829], 80.00th=[14353], 90.00th=[15008], 95.00th=[15795], 00:33:25.516 | 99.00th=[54264], 99.50th=[55313], 99.90th=[93848], 99.95th=[93848], 00:33:25.516 | 99.99th=[94897] 00:33:25.516 bw ( KiB/s): min=23296, max=32768, per=35.74%, avg=27525.80, stdev=2529.49, samples=20 00:33:25.516 iops : min= 182, max= 256, avg=215.00, stdev=19.73, samples=20 00:33:25.516 lat (msec) : 10=7.62%, 20=89.55%, 50=0.05%, 100=2.79% 00:33:25.516 cpu : usr=95.82%, sys=3.71%, ctx=443, majf=0, minf=124 00:33:25.516 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:25.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.516 issued rwts: total=2153,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.516 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:25.516 filename0: (groupid=0, jobs=1): err= 0: pid=2183090: Mon Jun 10 12:11:19 2024 00:33:25.516 read: IOPS=159, BW=19.9MiB/s (20.9MB/s)(200MiB/10044msec) 00:33:25.516 slat (nsec): min=5667, max=30230, avg=6637.28, stdev=1122.55 00:33:25.516 clat (msec): min=8, max=136, avg=18.81, stdev=13.34 00:33:25.516 lat (msec): min=8, max=136, avg=18.82, stdev=13.34 00:33:25.516 clat percentiles (msec): 00:33:25.516 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 14], 00:33:25.516 | 30.00th=[ 15], 40.00th=[ 15], 50.00th=[ 16], 60.00th=[ 16], 00:33:25.516 | 70.00th=[ 16], 80.00th=[ 17], 90.00th=[ 19], 95.00th=[ 56], 00:33:25.516 | 99.00th=[ 58], 99.50th=[ 94], 99.90th=[ 100], 99.95th=[ 138], 00:33:25.516 | 99.99th=[ 138] 00:33:25.516 bw ( KiB/s): min=13056, max=26880, per=26.54%, avg=20440.10, stdev=3696.71, samples=20 00:33:25.516 iops : min= 102, max= 210, avg=159.65, stdev=28.95, samples=20 00:33:25.516 lat (msec) : 10=0.56%, 20=89.81%, 50=0.25%, 100=9.32%, 250=0.06% 00:33:25.516 cpu : usr=95.63%, sys=3.80%, ctx=660, majf=0, minf=129 00:33:25.516 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:25.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.516 issued rwts: total=1599,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.516 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:25.516 filename0: (groupid=0, jobs=1): err= 0: pid=2183091: Mon Jun 10 12:11:19 2024 00:33:25.516 read: IOPS=228, BW=28.5MiB/s (29.9MB/s)(287MiB/10049msec) 00:33:25.516 slat (nsec): min=5670, max=31634, avg=6413.68, stdev=875.80 00:33:25.516 clat (usec): min=6665, max=56521, avg=13112.95, stdev=3777.38 00:33:25.516 lat (usec): min=6671, max=56528, avg=13119.36, stdev=3777.41 00:33:25.516 clat percentiles (usec): 00:33:25.516 | 1.00th=[ 8029], 
5.00th=[ 9503], 10.00th=[10159], 20.00th=[10945], 00:33:25.516 | 30.00th=[11731], 40.00th=[12649], 50.00th=[13173], 60.00th=[13698], 00:33:25.516 | 70.00th=[14091], 80.00th=[14615], 90.00th=[15270], 95.00th=[15795], 00:33:25.516 | 99.00th=[16909], 99.50th=[53740], 99.90th=[55837], 99.95th=[55837], 00:33:25.516 | 99.99th=[56361] 00:33:25.516 bw ( KiB/s): min=24576, max=31744, per=38.09%, avg=29337.60, stdev=1999.25, samples=20 00:33:25.516 iops : min= 192, max= 248, avg=229.20, stdev=15.62, samples=20 00:33:25.516 lat (msec) : 10=8.11%, 20=91.28%, 50=0.04%, 100=0.57% 00:33:25.516 cpu : usr=95.89%, sys=3.87%, ctx=14, majf=0, minf=159 00:33:25.516 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:25.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.516 issued rwts: total=2294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.516 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:25.516 00:33:25.516 Run status group 0 (all jobs): 00:33:25.516 READ: bw=75.2MiB/s (78.9MB/s), 19.9MiB/s-28.5MiB/s (20.9MB/s-29.9MB/s), io=756MiB (792MB), run=10044-10049msec 00:33:25.777 12:11:19 -- target/dif.sh@132 -- # destroy_subsystems 0 00:33:25.777 12:11:19 -- target/dif.sh@43 -- # local sub 00:33:25.777 12:11:19 -- target/dif.sh@45 -- # for sub in "$@" 00:33:25.777 12:11:19 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:25.777 12:11:19 -- target/dif.sh@36 -- # local sub_id=0 00:33:25.777 12:11:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:25.777 12:11:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:25.777 12:11:19 -- common/autotest_common.sh@10 -- # set +x 00:33:25.777 12:11:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:25.777 12:11:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:25.777 12:11:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:25.777 12:11:19 -- common/autotest_common.sh@10 -- # set +x 00:33:25.777 12:11:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:25.777 00:33:25.777 real 0m11.161s 00:33:25.777 user 0m41.773s 00:33:25.777 sys 0m1.437s 00:33:25.777 12:11:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:25.777 12:11:19 -- common/autotest_common.sh@10 -- # set +x 00:33:25.777 ************************************ 00:33:25.777 END TEST fio_dif_digest 00:33:25.777 ************************************ 00:33:25.777 12:11:19 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:33:25.777 12:11:19 -- target/dif.sh@147 -- # nvmftestfini 00:33:25.777 12:11:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:25.777 12:11:19 -- nvmf/common.sh@116 -- # sync 00:33:25.777 12:11:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:25.777 12:11:19 -- nvmf/common.sh@119 -- # set +e 00:33:25.777 12:11:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:25.777 12:11:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:25.777 rmmod nvme_tcp 00:33:25.777 rmmod nvme_fabrics 00:33:25.777 rmmod nvme_keyring 00:33:25.777 12:11:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:25.777 12:11:19 -- nvmf/common.sh@123 -- # set -e 00:33:25.777 12:11:19 -- nvmf/common.sh@124 -- # return 0 00:33:25.777 12:11:19 -- nvmf/common.sh@477 -- # '[' -n 2172538 ']' 00:33:25.777 12:11:19 -- nvmf/common.sh@478 -- # killprocess 2172538 00:33:25.777 12:11:19 -- common/autotest_common.sh@926 -- # '[' -z 2172538 ']' 
00:33:25.777 12:11:19 -- common/autotest_common.sh@930 -- # kill -0 2172538 00:33:25.777 12:11:19 -- common/autotest_common.sh@931 -- # uname 00:33:25.777 12:11:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:25.777 12:11:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2172538 00:33:25.777 12:11:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:25.777 12:11:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:25.777 12:11:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2172538' 00:33:25.777 killing process with pid 2172538 00:33:25.777 12:11:19 -- common/autotest_common.sh@945 -- # kill 2172538 00:33:25.777 12:11:19 -- common/autotest_common.sh@950 -- # wait 2172538 00:33:26.038 12:11:19 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:33:26.038 12:11:19 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:29.337 Waiting for block devices as requested 00:33:29.337 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:29.337 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:29.337 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:29.338 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:29.338 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:29.599 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:29.599 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:29.599 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:29.859 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:29.859 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:29.859 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:30.119 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:30.119 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:30.119 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:30.119 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:30.380 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:30.380 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:30.380 12:11:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:30.380 12:11:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:30.380 12:11:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:30.380 12:11:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:30.380 12:11:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:30.380 12:11:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:30.380 12:11:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:32.362 12:11:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:32.362 00:33:32.362 real 1m16.213s 00:33:32.362 user 8m1.136s 00:33:32.362 sys 0m18.661s 00:33:32.362 12:11:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:32.362 12:11:26 -- common/autotest_common.sh@10 -- # set +x 00:33:32.362 ************************************ 00:33:32.362 END TEST nvmf_dif 00:33:32.362 ************************************ 00:33:32.362 12:11:26 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:32.362 12:11:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:32.362 12:11:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:32.362 12:11:26 -- common/autotest_common.sh@10 -- # set +x 00:33:32.622 ************************************ 00:33:32.622 START TEST nvmf_abort_qd_sizes 00:33:32.622 ************************************ 00:33:32.622 
12:11:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:32.622 * Looking for test storage... 00:33:32.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:32.622 12:11:26 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:32.622 12:11:26 -- nvmf/common.sh@7 -- # uname -s 00:33:32.622 12:11:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:32.622 12:11:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:32.622 12:11:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:32.622 12:11:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:32.622 12:11:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:32.622 12:11:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:32.622 12:11:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:32.622 12:11:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:32.622 12:11:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:32.622 12:11:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:32.622 12:11:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:32.622 12:11:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:32.622 12:11:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:32.622 12:11:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:32.622 12:11:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:32.622 12:11:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:32.622 12:11:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:32.622 12:11:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:32.622 12:11:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:32.622 12:11:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.622 12:11:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.622 12:11:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.623 12:11:26 -- paths/export.sh@5 -- # export PATH 00:33:32.623 12:11:26 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.623 12:11:26 -- nvmf/common.sh@46 -- # : 0 00:33:32.623 12:11:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:32.623 12:11:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:32.623 12:11:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:32.623 12:11:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:32.623 12:11:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:32.623 12:11:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:32.623 12:11:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:32.623 12:11:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:32.623 12:11:26 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:33:32.623 12:11:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:32.623 12:11:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:32.623 12:11:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:32.623 12:11:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:32.623 12:11:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:32.623 12:11:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:32.623 12:11:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:32.623 12:11:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:32.623 12:11:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:33:32.623 12:11:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:32.623 12:11:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:32.623 12:11:26 -- common/autotest_common.sh@10 -- # set +x 00:33:40.768 12:11:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:40.768 12:11:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:40.768 12:11:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:40.768 12:11:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:40.768 12:11:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:40.768 12:11:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:40.768 12:11:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:40.768 12:11:33 -- nvmf/common.sh@294 -- # net_devs=() 00:33:40.768 12:11:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:40.768 12:11:33 -- nvmf/common.sh@295 -- # e810=() 00:33:40.768 12:11:33 -- nvmf/common.sh@295 -- # local -ga e810 00:33:40.768 12:11:33 -- nvmf/common.sh@296 -- # x722=() 00:33:40.768 12:11:33 -- nvmf/common.sh@296 -- # local -ga x722 00:33:40.768 12:11:33 -- nvmf/common.sh@297 -- # mlx=() 00:33:40.768 12:11:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:40.768 12:11:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:40.768 12:11:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:40.768 12:11:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:40.768 12:11:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:40.768 12:11:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:40.768 12:11:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:40.768 12:11:33 -- nvmf/common.sh@311 -- # 
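Behind the trace above, nvmf/common.sh has just minted a host identity (NVME_HOSTNQN/NVME_HOSTID from nvme gen-hostnqn) and packaged it into the NVME_HOST array and NVME_CONNECT. When a test attaches the kernel initiator rather than the SPDK one, those variables are typically combined along these lines (subsystem NQN is illustrative only, not taken from this run):

  # attach the kernel NVMe/TCP initiator using the generated host identity
  $NVME_CONNECT -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" \
      -n nqn.2016-06.io.spdk:cnode0 "${NVME_HOST[@]}"

NVMF_PORT is already 4420 from the sourcing above, and NVMF_FIRST_TARGET_IP resolves to 10.0.0.2 once nvmftestinit finishes the TCP setup below.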
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:40.768 12:11:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:40.768 12:11:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:40.768 12:11:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:40.768 12:11:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:40.768 12:11:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:40.768 12:11:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:40.768 12:11:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:40.768 12:11:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:40.768 12:11:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:40.768 12:11:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:40.768 12:11:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:40.768 12:11:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:40.768 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:40.768 12:11:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:40.768 12:11:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:40.768 12:11:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:40.768 12:11:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:40.768 12:11:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:40.768 12:11:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:40.768 12:11:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:40.768 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:40.768 12:11:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:40.768 12:11:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:40.768 12:11:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:40.768 12:11:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:40.768 12:11:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:40.768 12:11:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:40.768 12:11:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:40.768 12:11:33 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:33:40.768 12:11:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:40.768 12:11:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:40.768 12:11:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:40.768 12:11:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:40.768 12:11:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:40.768 Found net devices under 0000:31:00.0: cvl_0_0 00:33:40.768 12:11:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:40.768 12:11:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:40.768 12:11:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:40.768 12:11:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:40.768 12:11:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:40.768 12:11:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:40.768 Found net devices under 0000:31:00.1: cvl_0_1 00:33:40.768 12:11:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:40.768 12:11:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:40.768 12:11:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:40.768 12:11:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:40.768 12:11:33 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:40.768 12:11:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:40.768 12:11:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:40.768 12:11:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:40.768 12:11:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:40.769 12:11:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:40.769 12:11:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:40.769 12:11:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:40.769 12:11:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:40.769 12:11:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:40.769 12:11:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:40.769 12:11:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:40.769 12:11:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:40.769 12:11:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:40.769 12:11:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:40.769 12:11:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:40.769 12:11:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:40.769 12:11:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:40.769 12:11:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:40.769 12:11:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:40.769 12:11:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:40.769 12:11:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:40.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:40.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:33:40.769 00:33:40.769 --- 10.0.0.2 ping statistics --- 00:33:40.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:40.769 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:33:40.769 12:11:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:40.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:40.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.360 ms 00:33:40.769 00:33:40.769 --- 10.0.0.1 ping statistics --- 00:33:40.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:40.769 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:33:40.769 12:11:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:40.769 12:11:33 -- nvmf/common.sh@410 -- # return 0 00:33:40.769 12:11:33 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:33:40.769 12:11:33 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:43.319 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:43.319 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:43.319 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:43.319 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:43.319 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:43.319 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:43.319 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:43.319 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:43.319 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:43.319 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:43.319 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:43.319 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:43.319 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:43.319 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:43.319 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:43.319 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:43.579 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:33:43.579 12:11:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:43.579 12:11:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:43.579 12:11:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:43.579 12:11:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:43.579 12:11:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:43.579 12:11:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:43.579 12:11:37 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:33:43.579 12:11:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:43.579 12:11:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:43.579 12:11:37 -- common/autotest_common.sh@10 -- # set +x 00:33:43.579 12:11:37 -- nvmf/common.sh@469 -- # nvmfpid=2192724 00:33:43.579 12:11:37 -- nvmf/common.sh@470 -- # waitforlisten 2192724 00:33:43.579 12:11:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:33:43.579 12:11:37 -- common/autotest_common.sh@819 -- # '[' -z 2192724 ']' 00:33:43.579 12:11:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:43.579 12:11:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:43.579 12:11:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:43.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:43.579 12:11:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:43.579 12:11:37 -- common/autotest_common.sh@10 -- # set +x 00:33:43.841 [2024-06-10 12:11:37.368940] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:33:43.841 [2024-06-10 12:11:37.368982] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:43.841 EAL: No free 2048 kB hugepages reported on node 1 00:33:43.841 [2024-06-10 12:11:37.436457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:43.841 [2024-06-10 12:11:37.500903] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:43.841 [2024-06-10 12:11:37.501041] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:43.841 [2024-06-10 12:11:37.501052] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:43.841 [2024-06-10 12:11:37.501060] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:43.841 [2024-06-10 12:11:37.501197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:43.841 [2024-06-10 12:11:37.501331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:43.841 [2024-06-10 12:11:37.501623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:43.841 [2024-06-10 12:11:37.501624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:44.414 12:11:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:44.414 12:11:38 -- common/autotest_common.sh@852 -- # return 0 00:33:44.414 12:11:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:44.414 12:11:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:44.414 12:11:38 -- common/autotest_common.sh@10 -- # set +x 00:33:44.414 12:11:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:44.414 12:11:38 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:33:44.414 12:11:38 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:33:44.414 12:11:38 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:33:44.414 12:11:38 -- scripts/common.sh@311 -- # local bdf bdfs 00:33:44.414 12:11:38 -- scripts/common.sh@312 -- # local nvmes 00:33:44.414 12:11:38 -- scripts/common.sh@314 -- # [[ -n 0000:65:00.0 ]] 00:33:44.676 12:11:38 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:33:44.676 12:11:38 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:33:44.676 12:11:38 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:33:44.676 12:11:38 -- scripts/common.sh@322 -- # uname -s 00:33:44.676 12:11:38 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:33:44.676 12:11:38 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:33:44.676 12:11:38 -- scripts/common.sh@327 -- # (( 1 )) 00:33:44.676 12:11:38 -- scripts/common.sh@328 -- # printf '%s\n' 0000:65:00.0 00:33:44.676 12:11:38 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:33:44.676 12:11:38 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:65:00.0 00:33:44.676 12:11:38 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:33:44.676 12:11:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:44.676 12:11:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:44.677 12:11:38 -- common/autotest_common.sh@10 -- # set +x 00:33:44.677 ************************************ 00:33:44.677 START TEST 
spdk_target_abort 00:33:44.677 ************************************ 00:33:44.677 12:11:38 -- common/autotest_common.sh@1104 -- # spdk_target 00:33:44.677 12:11:38 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:33:44.677 12:11:38 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:33:44.677 12:11:38 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:33:44.677 12:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.677 12:11:38 -- common/autotest_common.sh@10 -- # set +x 00:33:44.938 spdk_targetn1 00:33:44.938 12:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.938 12:11:38 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:44.938 12:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.938 12:11:38 -- common/autotest_common.sh@10 -- # set +x 00:33:44.938 [2024-06-10 12:11:38.516152] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:44.938 12:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.938 12:11:38 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:33:44.938 12:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.938 12:11:38 -- common/autotest_common.sh@10 -- # set +x 00:33:44.938 12:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.938 12:11:38 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:33:44.938 12:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.938 12:11:38 -- common/autotest_common.sh@10 -- # set +x 00:33:44.938 12:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.938 12:11:38 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:33:44.938 12:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.938 12:11:38 -- common/autotest_common.sh@10 -- # set +x 00:33:44.938 [2024-06-10 12:11:38.556407] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:44.938 12:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.938 12:11:38 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:33:44.938 12:11:38 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:44.938 12:11:38 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:44.938 12:11:38 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:33:44.938 12:11:38 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:44.938 12:11:38 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:33:44.938 12:11:38 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:44.938 12:11:38 -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:44.938 12:11:38 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:44.938 12:11:38 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:44.938 12:11:38 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:44.938 12:11:38 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:44.938 12:11:38 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:44.938 12:11:38 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid 
subnqn 00:33:44.938 12:11:38 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:33:44.938 12:11:38 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:44.938 12:11:38 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:44.938 12:11:38 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:44.938 12:11:38 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:33:44.938 12:11:38 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:44.938 12:11:38 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:33:44.938 EAL: No free 2048 kB hugepages reported on node 1 00:33:45.200 [2024-06-10 12:11:38.770823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2008 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:33:45.200 [2024-06-10 12:11:38.770847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00fc p:1 m:0 dnr:0 00:33:45.200 [2024-06-10 12:11:38.798189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3384 len:8 PRP1 0x2000078c8000 PRP2 0x0 00:33:45.200 [2024-06-10 12:11:38.798208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00a9 p:0 m:0 dnr:0 00:33:48.515 Initializing NVMe Controllers 00:33:48.515 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:33:48.515 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:33:48.515 Initialization complete. Launching workers. 
00:33:48.515 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 13324, failed: 2 00:33:48.515 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 3165, failed to submit 10161 00:33:48.515 success 735, unsuccess 2430, failed 0 00:33:48.515 12:11:41 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:48.515 12:11:41 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:33:48.515 EAL: No free 2048 kB hugepages reported on node 1 00:33:48.515 [2024-06-10 12:11:41.919545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 lba:488 len:8 PRP1 0x200007c50000 PRP2 0x0 00:33:48.515 [2024-06-10 12:11:41.919581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:33:48.515 [2024-06-10 12:11:41.927402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:680 len:8 PRP1 0x200007c4c000 PRP2 0x0 00:33:48.515 [2024-06-10 12:11:41.927435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:33:48.515 [2024-06-10 12:11:41.989483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:1960 len:8 PRP1 0x200007c56000 PRP2 0x0 00:33:48.515 [2024-06-10 12:11:41.989509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:0000 p:1 m:0 dnr:0 00:33:48.515 [2024-06-10 12:11:41.997400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:2128 len:8 PRP1 0x200007c5a000 PRP2 0x0 00:33:48.515 [2024-06-10 12:11:41.997423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:48.515 [2024-06-10 12:11:42.029362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:171 nsid:1 lba:2848 len:8 PRP1 0x200007c3e000 PRP2 0x0 00:33:48.515 [2024-06-10 12:11:42.029386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:171 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:48.515 [2024-06-10 12:11:42.037346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:175 nsid:1 lba:3104 len:8 PRP1 0x200007c4c000 PRP2 0x0 00:33:48.515 [2024-06-10 12:11:42.037369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:175 cdw0:0 sqhd:0086 p:0 m:0 dnr:0 00:33:48.515 [2024-06-10 12:11:42.053278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:3448 len:8 PRP1 0x200007c4e000 PRP2 0x0 00:33:48.515 [2024-06-10 12:11:42.053301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:00b8 p:0 m:0 dnr:0 00:33:51.818 Initializing NVMe Controllers 00:33:51.819 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:33:51.819 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:33:51.819 Initialization complete. Launching workers. 
00:33:51.819 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8713, failed: 7 00:33:51.819 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1215, failed to submit 7505 00:33:51.819 success 367, unsuccess 848, failed 0 00:33:51.819 12:11:45 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:51.819 12:11:45 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:33:51.819 EAL: No free 2048 kB hugepages reported on node 1 00:33:51.819 [2024-06-10 12:11:45.294053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:176 nsid:1 lba:680 len:8 PRP1 0x200007914000 PRP2 0x0 00:33:51.819 [2024-06-10 12:11:45.294095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:176 cdw0:0 sqhd:0071 p:1 m:0 dnr:0 00:33:51.819 [2024-06-10 12:11:45.317752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:181 nsid:1 lba:3136 len:8 PRP1 0x200007910000 PRP2 0x0 00:33:51.819 [2024-06-10 12:11:45.317772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:181 cdw0:0 sqhd:00a2 p:0 m:0 dnr:0 00:33:51.819 [2024-06-10 12:11:45.325831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:156 nsid:1 lba:3960 len:8 PRP1 0x2000078c8000 PRP2 0x0 00:33:51.819 [2024-06-10 12:11:45.325849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:156 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:55.119 Initializing NVMe Controllers 00:33:55.119 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:33:55.119 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:33:55.119 Initialization complete. Launching workers. 
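After the qd=64 pass (its summary opens the next block), the script tears the userspace target down through the same JSON-RPC interface it was built with and then kills the nvmf_tgt process. A condensed sketch, assuming rpc_cmd resolves to scripts/rpc.py as it does in SPDK's common test helpers; tgt_pid stands in for the pid the trace shows (2192724):

  # Teardown of the SPDK userspace target, as traced below.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target
  "$SPDK_DIR/scripts/rpc.py" bdev_nvme_detach_controller spdk_target
  kill "$tgt_pid"   # killprocess in the trace then waits for the process to exit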
00:33:55.119 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 40252, failed: 3 00:33:55.119 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2645, failed to submit 37610 00:33:55.119 success 629, unsuccess 2016, failed 0 00:33:55.119 12:11:48 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:33:55.119 12:11:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:55.119 12:11:48 -- common/autotest_common.sh@10 -- # set +x 00:33:55.119 12:11:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:55.119 12:11:48 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:33:55.119 12:11:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:55.119 12:11:48 -- common/autotest_common.sh@10 -- # set +x 00:33:56.504 12:11:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:56.504 12:11:50 -- target/abort_qd_sizes.sh@62 -- # killprocess 2192724 00:33:56.504 12:11:50 -- common/autotest_common.sh@926 -- # '[' -z 2192724 ']' 00:33:56.504 12:11:50 -- common/autotest_common.sh@930 -- # kill -0 2192724 00:33:56.504 12:11:50 -- common/autotest_common.sh@931 -- # uname 00:33:56.504 12:11:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:56.504 12:11:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2192724 00:33:56.504 12:11:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:56.504 12:11:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:56.504 12:11:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2192724' 00:33:56.504 killing process with pid 2192724 00:33:56.504 12:11:50 -- common/autotest_common.sh@945 -- # kill 2192724 00:33:56.504 12:11:50 -- common/autotest_common.sh@950 -- # wait 2192724 00:33:56.765 00:33:56.765 real 0m12.147s 00:33:56.765 user 0m48.839s 00:33:56.765 sys 0m2.102s 00:33:56.765 12:11:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:56.765 12:11:50 -- common/autotest_common.sh@10 -- # set +x 00:33:56.765 ************************************ 00:33:56.765 END TEST spdk_target_abort 00:33:56.765 ************************************ 00:33:56.765 12:11:50 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:33:56.765 12:11:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:56.765 12:11:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:56.765 12:11:50 -- common/autotest_common.sh@10 -- # set +x 00:33:56.765 ************************************ 00:33:56.765 START TEST kernel_target_abort 00:33:56.765 ************************************ 00:33:56.765 12:11:50 -- common/autotest_common.sh@1104 -- # kernel_target 00:33:56.765 12:11:50 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:33:56.765 12:11:50 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:33:56.765 12:11:50 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:33:56.765 12:11:50 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:33:56.765 12:11:50 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:33:56.765 12:11:50 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:33:56.765 12:11:50 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:56.765 12:11:50 -- nvmf/common.sh@627 -- # local block nvme 00:33:56.765 12:11:50 
-- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:33:56.765 12:11:50 -- nvmf/common.sh@630 -- # modprobe nvmet 00:33:56.765 12:11:50 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:56.765 12:11:50 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:00.065 Waiting for block devices as requested 00:34:00.065 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:34:00.065 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:34:00.326 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:34:00.326 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:34:00.326 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:34:00.586 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:34:00.586 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:34:00.586 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:34:00.847 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:34:00.847 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:34:00.847 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:34:01.108 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:34:01.108 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:34:01.108 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:34:01.108 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:34:01.374 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:34:01.374 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:34:01.374 12:11:55 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:34:01.374 12:11:55 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:01.374 12:11:55 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:34:01.374 12:11:55 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:34:01.374 12:11:55 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:01.374 No valid GPT data, bailing 00:34:01.374 12:11:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:01.374 12:11:55 -- scripts/common.sh@393 -- # pt= 00:34:01.374 12:11:55 -- scripts/common.sh@394 -- # return 1 00:34:01.374 12:11:55 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:34:01.374 12:11:55 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:34:01.374 12:11:55 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:34:01.374 12:11:55 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:34:01.375 12:11:55 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:01.375 12:11:55 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:34:01.375 12:11:55 -- nvmf/common.sh@654 -- # echo 1 00:34:01.375 12:11:55 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:34:01.375 12:11:55 -- nvmf/common.sh@656 -- # echo 1 00:34:01.375 12:11:55 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:34:01.375 12:11:55 -- nvmf/common.sh@663 -- # echo tcp 00:34:01.375 12:11:55 -- nvmf/common.sh@664 -- # echo 4420 00:34:01.375 12:11:55 -- nvmf/common.sh@665 -- # echo ipv4 00:34:01.375 12:11:55 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:01.375 12:11:55 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:34:01.375 00:34:01.375 Discovery Log Number of Records 2, Generation counter 2 00:34:01.375 =====Discovery Log Entry 0====== 00:34:01.375 trtype: tcp 00:34:01.375 adrfam: ipv4 00:34:01.375 
subtype: current discovery subsystem 00:34:01.375 treq: not specified, sq flow control disable supported 00:34:01.375 portid: 1 00:34:01.375 trsvcid: 4420 00:34:01.375 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:01.375 traddr: 10.0.0.1 00:34:01.375 eflags: none 00:34:01.375 sectype: none 00:34:01.375 =====Discovery Log Entry 1====== 00:34:01.375 trtype: tcp 00:34:01.375 adrfam: ipv4 00:34:01.375 subtype: nvme subsystem 00:34:01.375 treq: not specified, sq flow control disable supported 00:34:01.375 portid: 1 00:34:01.375 trsvcid: 4420 00:34:01.375 subnqn: kernel_target 00:34:01.375 traddr: 10.0.0.1 00:34:01.375 eflags: none 00:34:01.375 sectype: none 00:34:01.375 12:11:55 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:34:01.375 12:11:55 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:01.375 12:11:55 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:01.375 12:11:55 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:01.375 12:11:55 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:01.375 12:11:55 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:34:01.375 12:11:55 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:01.375 12:11:55 -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:01.375 12:11:55 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:01.375 12:11:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:01.375 12:11:55 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:01.375 12:11:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:01.375 12:11:55 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:01.375 12:11:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:01.375 12:11:55 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:01.375 12:11:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:01.375 12:11:55 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:01.375 12:11:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:01.375 12:11:55 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:34:01.375 12:11:55 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:01.375 12:11:55 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:34:01.642 EAL: No free 2048 kB hugepages reported on node 1 00:34:04.945 Initializing NVMe Controllers 00:34:04.945 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:34:04.945 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:34:04.945 Initialization complete. Launching workers. 
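The kernel_target_abort test above points the same abort binary at a target served by the Linux nvmet driver instead of the SPDK application. The configure_kernel_target trace earlier in this block builds that target through configfs; xtrace does not show redirection targets, so the attribute names below are the standard nvmet configfs entries rather than values read from this log, and the model-string write is omitted. A condensed sketch:

  # Kernel NVMe-oF/TCP target on 10.0.0.1:4420 backed by /dev/nvme0n1, per the trace above.
  modprobe nvmet       # as in the trace; the tcp transport module
  modprobe nvmet_tcp   #   is unloaded together with nvmet in the cleanup later
  sub=/sys/kernel/config/nvmet/subsystems/kernel_target
  port=/sys/kernel/config/nvmet/ports/1
  mkdir -p "$sub/namespaces/1" "$port"
  echo 1            > "$sub/attr_allow_any_host"         # assumed target of the bare 'echo 1'
  echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
  echo 1            > "$sub/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"
  # The listener is then verified from the host side (the trace also passes --hostnqn/--hostid):
  nvme discover -t tcp -a 10.0.0.1 -s 4420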
00:34:04.945 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 56199, failed: 0 00:34:04.945 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 56199, failed to submit 0 00:34:04.945 success 0, unsuccess 56199, failed 0 00:34:04.945 12:11:58 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:04.945 12:11:58 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:34:04.945 EAL: No free 2048 kB hugepages reported on node 1 00:34:08.246 Initializing NVMe Controllers 00:34:08.246 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:34:08.246 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:34:08.246 Initialization complete. Launching workers. 00:34:08.246 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 97321, failed: 0 00:34:08.246 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 24526, failed to submit 72795 00:34:08.246 success 0, unsuccess 24526, failed 0 00:34:08.246 12:12:01 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:08.246 12:12:01 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:34:08.246 EAL: No free 2048 kB hugepages reported on node 1 00:34:10.788 Initializing NVMe Controllers 00:34:10.788 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:34:10.788 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:34:10.788 Initialization complete. Launching workers. 
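When the queue-depth sweep against the kernel target completes (the last summary opens the next block), clean_kernel_target unwinds the configfs tree in reverse order and unloads the modules. Condensed from the trace that follows; the redirect target of the bare 'echo 0' is not visible in xtrace and is assumed to be the namespace enable attribute:

  # Teardown of the kernel target, per the clean_kernel_target trace below.
  sub=/sys/kernel/config/nvmet/subsystems/kernel_target
  port=/sys/kernel/config/nvmet/ports/1
  echo 0 > "$sub/namespaces/1/enable"        # assumed target of 'echo 0'
  rm -f "$port/subsystems/kernel_target"     # drop the port -> subsystem link first
  rmdir "$sub/namespaces/1"
  rmdir "$port"
  rmdir "$sub"
  modprobe -r nvmet_tcp nvmet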
00:34:10.788 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 93458, failed: 0 00:34:10.788 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 23366, failed to submit 70092 00:34:10.788 success 0, unsuccess 23366, failed 0 00:34:10.788 12:12:04 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:34:10.788 12:12:04 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:34:10.788 12:12:04 -- nvmf/common.sh@677 -- # echo 0 00:34:10.788 12:12:04 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:34:10.788 12:12:04 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:34:10.788 12:12:04 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:10.788 12:12:04 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:34:10.788 12:12:04 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:34:10.788 12:12:04 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:34:10.788 00:34:10.788 real 0m14.002s 00:34:10.788 user 0m7.206s 00:34:10.788 sys 0m3.575s 00:34:10.788 12:12:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:10.788 12:12:04 -- common/autotest_common.sh@10 -- # set +x 00:34:10.788 ************************************ 00:34:10.788 END TEST kernel_target_abort 00:34:10.788 ************************************ 00:34:10.788 12:12:04 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:34:10.788 12:12:04 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:34:10.788 12:12:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:34:10.788 12:12:04 -- nvmf/common.sh@116 -- # sync 00:34:10.788 12:12:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:34:10.788 12:12:04 -- nvmf/common.sh@119 -- # set +e 00:34:10.788 12:12:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:34:10.788 12:12:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:34:10.788 rmmod nvme_tcp 00:34:10.788 rmmod nvme_fabrics 00:34:10.788 rmmod nvme_keyring 00:34:10.788 12:12:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:34:10.788 12:12:04 -- nvmf/common.sh@123 -- # set -e 00:34:10.788 12:12:04 -- nvmf/common.sh@124 -- # return 0 00:34:10.788 12:12:04 -- nvmf/common.sh@477 -- # '[' -n 2192724 ']' 00:34:10.788 12:12:04 -- nvmf/common.sh@478 -- # killprocess 2192724 00:34:10.788 12:12:04 -- common/autotest_common.sh@926 -- # '[' -z 2192724 ']' 00:34:10.788 12:12:04 -- common/autotest_common.sh@930 -- # kill -0 2192724 00:34:10.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2192724) - No such process 00:34:10.788 12:12:04 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2192724 is not found' 00:34:10.788 Process with pid 2192724 is not found 00:34:10.788 12:12:04 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:34:10.788 12:12:04 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:15.071 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:34:15.071 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:34:15.071 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:34:15.071 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:34:15.071 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:34:15.071 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:34:15.071 0000:80:01.0 (8086 0b00): Already using the ioatdma 
driver 00:34:15.071 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:34:15.071 0000:65:00.0 (144d a80a): Already using the nvme driver 00:34:15.071 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:34:15.071 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:34:15.071 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:34:15.071 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:34:15.071 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:34:15.071 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:34:15.071 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:34:15.071 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:34:15.071 12:12:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:34:15.071 12:12:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:34:15.071 12:12:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:15.071 12:12:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:34:15.071 12:12:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:15.071 12:12:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:15.071 12:12:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:16.984 12:12:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:34:16.984 00:34:16.984 real 0m44.265s 00:34:16.984 user 1m1.277s 00:34:16.984 sys 0m16.036s 00:34:16.984 12:12:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:16.984 12:12:10 -- common/autotest_common.sh@10 -- # set +x 00:34:16.984 ************************************ 00:34:16.984 END TEST nvmf_abort_qd_sizes 00:34:16.984 ************************************ 00:34:16.984 12:12:10 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:34:16.984 12:12:10 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:34:16.984 12:12:10 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:34:16.984 12:12:10 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:34:16.984 12:12:10 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:34:16.984 12:12:10 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:34:16.984 12:12:10 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:34:16.984 12:12:10 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:16.984 12:12:10 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:34:16.984 12:12:10 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:16.984 12:12:10 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:34:16.984 12:12:10 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:16.984 12:12:10 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:16.984 12:12:10 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:34:16.984 12:12:10 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:34:16.984 12:12:10 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:34:16.984 12:12:10 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:34:16.984 12:12:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:16.984 12:12:10 -- common/autotest_common.sh@10 -- # set +x 00:34:16.984 12:12:10 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:34:16.984 12:12:10 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:34:16.984 12:12:10 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:34:16.984 12:12:10 -- common/autotest_common.sh@10 -- # set +x 00:34:25.128 INFO: APP EXITING 00:34:25.128 INFO: killing all VMs 00:34:25.128 INFO: killing vhost app 00:34:25.128 INFO: EXIT DONE 00:34:27.677 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:34:27.677 0000:80:01.7 (8086 
0b00): Already using the ioatdma driver 00:34:27.677 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:34:27.677 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:34:27.677 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:34:27.677 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:34:27.677 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:34:27.677 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:34:27.677 0000:65:00.0 (144d a80a): Already using the nvme driver 00:34:27.677 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:34:27.677 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:34:27.677 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:34:27.677 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:34:27.677 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:34:27.677 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:34:27.677 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:34:27.677 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:34:31.887 Cleaning 00:34:31.887 Removing: /var/run/dpdk/spdk0/config 00:34:31.887 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:31.887 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:31.887 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:31.887 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:31.887 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:34:31.887 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:34:31.887 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:34:31.887 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:34:31.887 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:31.887 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:31.887 Removing: /var/run/dpdk/spdk1/config 00:34:31.887 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:34:31.887 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:34:31.887 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:34:31.887 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:34:31.887 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:34:31.887 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:34:31.887 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:34:31.887 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:34:31.887 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:34:31.887 Removing: /var/run/dpdk/spdk1/hugepage_info 00:34:31.887 Removing: /var/run/dpdk/spdk1/mp_socket 00:34:31.887 Removing: /var/run/dpdk/spdk2/config 00:34:31.887 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:34:31.887 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:34:31.887 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:34:31.887 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:34:31.887 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:34:31.887 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:34:31.887 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:34:31.887 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:34:31.887 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:34:31.887 Removing: /var/run/dpdk/spdk2/hugepage_info 00:34:31.887 Removing: /var/run/dpdk/spdk3/config 00:34:31.887 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:34:31.887 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:34:31.887 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:34:31.887 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:34:31.887 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:34:31.887 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:34:31.887 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:34:31.887 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:34:31.887 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:34:31.887 Removing: /var/run/dpdk/spdk3/hugepage_info 00:34:31.887 Removing: /var/run/dpdk/spdk4/config 00:34:31.887 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:34:31.887 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:34:31.887 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:34:31.887 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:34:31.887 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:34:31.887 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:34:31.887 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:34:31.887 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:34:31.887 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:34:31.887 Removing: /var/run/dpdk/spdk4/hugepage_info 00:34:31.887 Removing: /dev/shm/bdev_svc_trace.1 00:34:31.887 Removing: /dev/shm/nvmf_trace.0 00:34:31.887 Removing: /dev/shm/spdk_tgt_trace.pid1727815 00:34:31.887 Removing: /var/run/dpdk/spdk0 00:34:31.887 Removing: /var/run/dpdk/spdk1 00:34:31.887 Removing: /var/run/dpdk/spdk2 00:34:31.887 Removing: /var/run/dpdk/spdk3 00:34:31.887 Removing: /var/run/dpdk/spdk4 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1726297 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1727815 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1728653 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1729681 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1730262 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1730643 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1731030 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1731433 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1731799 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1731930 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1732212 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1732593 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1733996 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1737282 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1737647 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1738018 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1738116 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1738729 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1738742 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1739235 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1739458 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1739820 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1739837 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1740199 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1740213 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1740704 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1741004 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1741398 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1741765 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1741785 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1741847 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1742181 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1742532 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1742670 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1742922 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1743257 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1743608 00:34:31.887 
Removing: /var/run/dpdk/spdk_pid1743837 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1744005 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1744316 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1744668 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1744977 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1745149 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1745379 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1745736 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1746071 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1746283 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1746445 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1746796 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1747132 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1747456 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1747577 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1747859 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1748195 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1748546 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1748743 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1748930 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1749253 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1749602 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1749895 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1750122 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1750420 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1750777 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1751114 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1751327 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1751500 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1751874 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1752506 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1752977 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1753142 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1753360 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1753636 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1753938 00:34:31.887 Removing: /var/run/dpdk/spdk_pid1758381 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1856652 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1861772 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1873612 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1880290 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1885188 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1885880 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1896492 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1896851 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1902537 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1909464 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1912500 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1924848 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1935711 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1937780 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1939042 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1960095 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1964693 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1970180 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1972206 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1974276 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1974592 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1974906 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1974981 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1975678 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1978037 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1979010 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1979527 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1986354 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1992920 00:34:31.888 Removing: /var/run/dpdk/spdk_pid1998888 00:34:31.888 
Removing: /var/run/dpdk/spdk_pid2044690 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2049730 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2057200 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2058710 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2060293 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2065456 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2070548 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2079705 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2079803 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2084615 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2084950 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2085280 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2085627 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2085668 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2087008 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2089036 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2091062 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2093086 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2095106 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2097128 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2105104 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2105677 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2106884 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2108085 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2114386 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2117741 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2124225 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2131179 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2137829 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2138643 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2139437 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2140123 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2141192 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2141884 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2142578 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2143269 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2148402 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2148782 00:34:31.888 Removing: /var/run/dpdk/spdk_pid2156427 00:34:32.149 Removing: /var/run/dpdk/spdk_pid2156759 00:34:32.149 Removing: /var/run/dpdk/spdk_pid2159341 00:34:32.149 Removing: /var/run/dpdk/spdk_pid2166892 00:34:32.149 Removing: /var/run/dpdk/spdk_pid2166898 00:34:32.149 Removing: /var/run/dpdk/spdk_pid2172889 00:34:32.149 Removing: /var/run/dpdk/spdk_pid2175107 00:34:32.149 Removing: /var/run/dpdk/spdk_pid2177614 00:34:32.149 Removing: /var/run/dpdk/spdk_pid2178850 00:34:32.149 Removing: /var/run/dpdk/spdk_pid2181400 00:34:32.149 Removing: /var/run/dpdk/spdk_pid2182855 00:34:32.149 Removing: /var/run/dpdk/spdk_pid2192859 00:34:32.149 Removing: /var/run/dpdk/spdk_pid2193436 00:34:32.149 Removing: /var/run/dpdk/spdk_pid2194106 00:34:32.149 Removing: /var/run/dpdk/spdk_pid2197079 00:34:32.149 Removing: /var/run/dpdk/spdk_pid2197528 00:34:32.149 Removing: /var/run/dpdk/spdk_pid2198209 00:34:32.149 Clean 00:34:32.149 killing process with pid 1669979 00:34:42.156 killing process with pid 1669976 00:34:42.156 killing process with pid 1669978 00:34:42.156 killing process with pid 1669977 00:34:42.156 12:12:35 -- common/autotest_common.sh@1436 -- # return 0 00:34:42.156 12:12:35 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:34:42.156 12:12:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:42.156 12:12:35 -- common/autotest_common.sh@10 -- # set +x 00:34:42.156 12:12:35 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:34:42.156 12:12:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:42.156 
12:12:35 -- common/autotest_common.sh@10 -- # set +x 00:34:42.156 12:12:35 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:42.156 12:12:35 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:34:42.156 12:12:35 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:34:42.156 12:12:35 -- spdk/autotest.sh@394 -- # hash lcov 00:34:42.156 12:12:35 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:34:42.156 12:12:35 -- spdk/autotest.sh@396 -- # hostname 00:34:42.156 12:12:35 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:34:42.156 geninfo: WARNING: invalid characters removed from testname! 00:35:04.119 12:12:57 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:06.664 12:13:00 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:08.048 12:13:01 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:09.961 12:13:03 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:11.875 12:13:05 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:12.817 12:13:06 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:14.855 12:13:08 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:14.855 12:13:08 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:14.855 12:13:08 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:35:14.855 12:13:08 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:14.855 12:13:08 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:14.855 12:13:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.855 12:13:08 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.855 12:13:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.855 12:13:08 -- paths/export.sh@5 -- $ export PATH 00:35:14.855 12:13:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.855 12:13:08 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:35:14.855 12:13:08 -- common/autobuild_common.sh@435 -- $ date +%s 00:35:14.855 12:13:08 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1718014388.XXXXXX 00:35:14.855 12:13:08 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1718014388.GtikSo 00:35:14.855 12:13:08 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:35:14.855 12:13:08 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:35:14.855 12:13:08 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:35:14.855 12:13:08 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:35:14.855 12:13:08 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:35:14.855 12:13:08 -- common/autobuild_common.sh@451 -- $ get_config_params 00:35:14.855 12:13:08 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:35:14.855 12:13:08 -- common/autotest_common.sh@10 -- $ set +x 00:35:14.855 12:13:08 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:35:14.855 12:13:08 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:35:14.855 12:13:08 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:14.855 12:13:08 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:35:14.855 12:13:08 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:35:14.855 12:13:08 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:35:14.855 12:13:08 -- spdk/autopackage.sh@19 -- $ timing_finish 00:35:14.855 12:13:08 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:35:14.855 12:13:08 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:35:14.855 12:13:08 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:35:14.855 12:13:08 -- spdk/autopackage.sh@20 -- $ exit 0 00:35:14.855 + [[ -n 1627649 ]] 00:35:14.855 + sudo kill 1627649 00:35:14.867 [Pipeline] } 00:35:14.881 [Pipeline] // stage 00:35:14.885 [Pipeline] } 00:35:14.898 [Pipeline] // timeout 00:35:14.902 [Pipeline] } 00:35:14.915 [Pipeline] // catchError 00:35:14.919 [Pipeline] } 00:35:14.932 [Pipeline] // wrap 00:35:14.939 [Pipeline] } 00:35:14.951 [Pipeline] // catchError 00:35:14.960 [Pipeline] stage 00:35:14.963 [Pipeline] { (Epilogue) 00:35:14.977 [Pipeline] catchError 00:35:14.979 [Pipeline] { 00:35:14.994 [Pipeline] echo 00:35:14.996 Cleanup processes 00:35:15.002 [Pipeline] sh 00:35:15.292 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:15.292 2215000 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:15.307 [Pipeline] sh 00:35:15.594 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:15.594 ++ grep -v 'sudo pgrep' 00:35:15.594 ++ awk '{print $1}' 00:35:15.594 + sudo kill -9 00:35:15.594 + true 00:35:15.607 [Pipeline] sh 00:35:15.894 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:28.155 [Pipeline] sh 00:35:28.443 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:28.443 Artifacts sizes are good 00:35:28.458 [Pipeline] archiveArtifacts 00:35:28.467 Archiving artifacts 00:35:28.715 [Pipeline] sh 00:35:29.001 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:35:29.019 [Pipeline] cleanWs 00:35:29.031 [WS-CLEANUP] Deleting project workspace... 00:35:29.031 [WS-CLEANUP] Deferred wipeout is used... 00:35:29.039 [WS-CLEANUP] done 00:35:29.041 [Pipeline] } 00:35:29.067 [Pipeline] // catchError 00:35:29.080 [Pipeline] sh 00:35:29.403 + logger -p user.info -t JENKINS-CI 00:35:29.413 [Pipeline] } 00:35:29.433 [Pipeline] // stage 00:35:29.443 [Pipeline] } 00:35:29.460 [Pipeline] // node 00:35:29.467 [Pipeline] End of Pipeline 00:35:29.505 Finished: SUCCESS
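For reference, the coverage post-processing interleaved with the autopackage output above reduces to a capture, merge, filter sequence. A condensed sketch with the long --rc geninfo/genhtml options dropped and the workspace paths folded into variables; the flags, exclude patterns, and file names are taken from the trace (the -t tag was the node hostname, spdk-cyp-12, in this run):

  # Condensed form of the lcov steps traced above.
  src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  out=$src/../output
  lcov -q -c -d "$src" -t "$(hostname)" -o "$out/cov_test.info"                      # capture run counters
  lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"   # merge with the baseline
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"               # strip third-party and app paths
  done
  rm -f "$out/cov_base.info" "$out/cov_test.info"                                    # drop the intermediate tracefiles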